00:00:00.000 Started by upstream project "autotest-per-patch" build number 126211 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.058 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.058 The recommended git tool is: git 00:00:00.058 using credential 00000000-0000-0000-0000-000000000002 00:00:00.059 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.082 Fetching changes from the remote Git repository 00:00:00.084 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.114 Using shallow fetch with depth 1 00:00:00.114 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.114 > git --version # timeout=10 00:00:00.165 > git --version # 'git version 2.39.2' 00:00:00.165 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.208 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.208 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.738 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.752 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.767 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.767 > git config core.sparsecheckout # timeout=10 00:00:04.779 > git read-tree -mu HEAD # timeout=10 00:00:04.797 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.815 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.815 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.941 [Pipeline] Start of Pipeline 00:00:04.957 [Pipeline] library 00:00:04.959 Loading library shm_lib@master 00:00:04.959 Library shm_lib@master is cached. Copying from home. 00:00:04.974 [Pipeline] node 00:00:04.982 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.983 [Pipeline] { 00:00:04.996 [Pipeline] catchError 00:00:04.997 [Pipeline] { 00:00:05.009 [Pipeline] wrap 00:00:05.018 [Pipeline] { 00:00:05.027 [Pipeline] stage 00:00:05.029 [Pipeline] { (Prologue) 00:00:05.047 [Pipeline] echo 00:00:05.048 Node: VM-host-SM9 00:00:05.053 [Pipeline] cleanWs 00:00:05.061 [WS-CLEANUP] Deleting project workspace... 00:00:05.061 [WS-CLEANUP] Deferred wipeout is used... 
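For reference, the checkout above amounts to a shallow, single-revision fetch of the jbp repository; a minimal sketch of the equivalent manual commands follows (the target directory name is only illustrative, and proxy/credential handling is omitted).

# Minimal sketch of the shallow checkout performed above (directory name illustrative).
git init jbp && cd jbp
git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
git checkout -f 7caca6989ac753a10259529aadac5754060382af   # "jenkins/jjb-config: Purge centos leftovers"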
00:00:05.065 [WS-CLEANUP] done 00:00:05.231 [Pipeline] setCustomBuildProperty 00:00:05.320 [Pipeline] httpRequest 00:00:05.349 [Pipeline] echo 00:00:05.350 Sorcerer 10.211.164.101 is alive 00:00:05.359 [Pipeline] httpRequest 00:00:05.363 HttpMethod: GET 00:00:05.364 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.364 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.370 Response Code: HTTP/1.1 200 OK 00:00:05.370 Success: Status code 200 is in the accepted range: 200,404 00:00:05.371 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.853 [Pipeline] sh 00:00:08.131 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.144 [Pipeline] httpRequest 00:00:08.168 [Pipeline] echo 00:00:08.169 Sorcerer 10.211.164.101 is alive 00:00:08.179 [Pipeline] httpRequest 00:00:08.184 HttpMethod: GET 00:00:08.185 URL: http://10.211.164.101/packages/spdk_d8f06a5fec162a535d3a23e9ae8fc57eb4431c82.tar.gz 00:00:08.185 Sending request to url: http://10.211.164.101/packages/spdk_d8f06a5fec162a535d3a23e9ae8fc57eb4431c82.tar.gz 00:00:08.211 Response Code: HTTP/1.1 200 OK 00:00:08.211 Success: Status code 200 is in the accepted range: 200,404 00:00:08.212 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_d8f06a5fec162a535d3a23e9ae8fc57eb4431c82.tar.gz 00:01:45.440 [Pipeline] sh 00:01:45.717 + tar --no-same-owner -xf spdk_d8f06a5fec162a535d3a23e9ae8fc57eb4431c82.tar.gz 00:01:49.026 [Pipeline] sh 00:01:49.304 + git -C spdk log --oneline -n5 00:01:49.304 d8f06a5fe scripts/pkgdep: Drop support for downloading shfmt binaries 00:01:49.304 719d03c6a sock/uring: only register net impl if supported 00:01:49.304 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:49.304 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:49.304 6c7c1f57e accel: add sequence outstanding stat 00:01:49.325 [Pipeline] writeFile 00:01:49.343 [Pipeline] sh 00:01:49.624 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:49.636 [Pipeline] sh 00:01:49.915 + cat autorun-spdk.conf 00:01:49.915 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.915 SPDK_TEST_NVMF=1 00:01:49.915 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.915 SPDK_TEST_USDT=1 00:01:49.915 SPDK_TEST_NVMF_MDNS=1 00:01:49.915 SPDK_RUN_UBSAN=1 00:01:49.915 NET_TYPE=virt 00:01:49.915 SPDK_JSONRPC_GO_CLIENT=1 00:01:49.915 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:49.922 RUN_NIGHTLY=0 00:01:49.924 [Pipeline] } 00:01:49.943 [Pipeline] // stage 00:01:49.960 [Pipeline] stage 00:01:49.962 [Pipeline] { (Run VM) 00:01:49.977 [Pipeline] sh 00:01:50.257 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:50.257 + echo 'Start stage prepare_nvme.sh' 00:01:50.257 Start stage prepare_nvme.sh 00:01:50.257 + [[ -n 4 ]] 00:01:50.257 + disk_prefix=ex4 00:01:50.257 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:50.257 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:50.257 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:50.257 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.257 ++ SPDK_TEST_NVMF=1 00:01:50.257 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:50.257 ++ SPDK_TEST_USDT=1 00:01:50.257 ++ SPDK_TEST_NVMF_MDNS=1 00:01:50.257 ++ SPDK_RUN_UBSAN=1 00:01:50.257 ++ NET_TYPE=virt 00:01:50.257 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:50.257 ++ 
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:50.257 ++ RUN_NIGHTLY=0 00:01:50.257 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:50.257 + nvme_files=() 00:01:50.257 + declare -A nvme_files 00:01:50.257 + backend_dir=/var/lib/libvirt/images/backends 00:01:50.257 + nvme_files['nvme.img']=5G 00:01:50.257 + nvme_files['nvme-cmb.img']=5G 00:01:50.257 + nvme_files['nvme-multi0.img']=4G 00:01:50.257 + nvme_files['nvme-multi1.img']=4G 00:01:50.257 + nvme_files['nvme-multi2.img']=4G 00:01:50.257 + nvme_files['nvme-openstack.img']=8G 00:01:50.257 + nvme_files['nvme-zns.img']=5G 00:01:50.257 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:50.257 + (( SPDK_TEST_FTL == 1 )) 00:01:50.257 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:50.257 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:50.257 + for nvme in "${!nvme_files[@]}" 00:01:50.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:50.257 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:50.257 + for nvme in "${!nvme_files[@]}" 00:01:50.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:50.257 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:50.257 + for nvme in "${!nvme_files[@]}" 00:01:50.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:50.515 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:50.515 + for nvme in "${!nvme_files[@]}" 00:01:50.515 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:50.515 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:50.515 + for nvme in "${!nvme_files[@]}" 00:01:50.515 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:50.515 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:50.774 + for nvme in "${!nvme_files[@]}" 00:01:50.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:50.774 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:51.033 + for nvme in "${!nvme_files[@]}" 00:01:51.033 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:51.600 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:51.600 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:51.600 + echo 'End stage prepare_nvme.sh' 00:01:51.600 End stage prepare_nvme.sh 00:01:51.612 [Pipeline] sh 00:01:51.891 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:51.891 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:01:51.891 00:01:51.891 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:51.891 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:51.891 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:51.891 HELP=0 00:01:51.891 DRY_RUN=0 00:01:51.891 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:51.891 NVME_DISKS_TYPE=nvme,nvme, 00:01:51.891 NVME_AUTO_CREATE=0 00:01:51.892 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:51.892 NVME_CMB=,, 00:01:51.892 NVME_PMR=,, 00:01:51.892 NVME_ZNS=,, 00:01:51.892 NVME_MS=,, 00:01:51.892 NVME_FDP=,, 00:01:51.892 SPDK_VAGRANT_DISTRO=fedora38 00:01:51.892 SPDK_VAGRANT_VMCPU=10 00:01:51.892 SPDK_VAGRANT_VMRAM=12288 00:01:51.892 SPDK_VAGRANT_PROVIDER=libvirt 00:01:51.892 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:51.892 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:51.892 SPDK_OPENSTACK_NETWORK=0 00:01:51.892 VAGRANT_PACKAGE_BOX=0 00:01:51.892 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:51.892 FORCE_DISTRO=true 00:01:51.892 VAGRANT_BOX_VERSION= 00:01:51.892 EXTRA_VAGRANTFILES= 00:01:51.892 NIC_MODEL=e1000 00:01:51.892 00:01:51.892 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:01:51.892 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:55.178 Bringing machine 'default' up with 'libvirt' provider... 00:01:56.109 ==> default: Creating image (snapshot of base box volume). 00:01:56.109 ==> default: Creating domain with the following settings... 00:01:56.109 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721057090_ae44dccc6f8a1bfa4f31 00:01:56.109 ==> default: -- Domain type: kvm 00:01:56.109 ==> default: -- Cpus: 10 00:01:56.109 ==> default: -- Feature: acpi 00:01:56.109 ==> default: -- Feature: apic 00:01:56.109 ==> default: -- Feature: pae 00:01:56.109 ==> default: -- Memory: 12288M 00:01:56.109 ==> default: -- Memory Backing: hugepages: 00:01:56.109 ==> default: -- Management MAC: 00:01:56.109 ==> default: -- Loader: 00:01:56.109 ==> default: -- Nvram: 00:01:56.109 ==> default: -- Base box: spdk/fedora38 00:01:56.109 ==> default: -- Storage pool: default 00:01:56.109 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721057090_ae44dccc6f8a1bfa4f31.img (20G) 00:01:56.109 ==> default: -- Volume Cache: default 00:01:56.109 ==> default: -- Kernel: 00:01:56.109 ==> default: -- Initrd: 00:01:56.109 ==> default: -- Graphics Type: vnc 00:01:56.109 ==> default: -- Graphics Port: -1 00:01:56.109 ==> default: -- Graphics IP: 127.0.0.1 00:01:56.109 ==> default: -- Graphics Password: Not defined 00:01:56.109 ==> default: -- Video Type: cirrus 00:01:56.109 ==> default: -- Video VRAM: 9216 00:01:56.109 ==> default: -- Sound Type: 00:01:56.109 ==> default: -- Keymap: en-us 00:01:56.109 ==> default: -- TPM Path: 00:01:56.109 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:56.109 ==> default: -- Command line args: 00:01:56.109 ==> default: -> value=-device, 00:01:56.109 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:56.109 ==> default: -> value=-drive, 00:01:56.109 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:56.109 ==> 
default: -> value=-device, 00:01:56.109 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:56.109 ==> default: -> value=-device, 00:01:56.109 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:56.109 ==> default: -> value=-drive, 00:01:56.109 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:56.109 ==> default: -> value=-device, 00:01:56.109 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:56.109 ==> default: -> value=-drive, 00:01:56.109 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:56.109 ==> default: -> value=-device, 00:01:56.109 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:56.109 ==> default: -> value=-drive, 00:01:56.109 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:56.109 ==> default: -> value=-device, 00:01:56.109 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:56.109 ==> default: Creating shared folders metadata... 00:01:56.109 ==> default: Starting domain. 00:01:57.482 ==> default: Waiting for domain to get an IP address... 00:02:15.570 ==> default: Waiting for SSH to become available... 00:02:15.570 ==> default: Configuring and enabling network interfaces... 00:02:18.101 default: SSH address: 192.168.121.88:22 00:02:18.101 default: SSH username: vagrant 00:02:18.101 default: SSH auth method: private key 00:02:20.648 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:27.202 ==> default: Mounting SSHFS shared folder... 00:02:29.100 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:29.100 ==> default: Checking Mount.. 00:02:30.478 ==> default: Folder Successfully Mounted! 00:02:30.478 ==> default: Running provisioner: file... 00:02:31.041 default: ~/.gitconfig => .gitconfig 00:02:31.606 00:02:31.606 SUCCESS! 00:02:31.606 00:02:31.606 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:31.606 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:31.606 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
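The domain definition above attaches two emulated NVMe controllers: nvme-0 (serial 12340) with a single namespace backed by ex4-nvme.img, and nvme-1 (serial 12341) with three namespaces backed by the ex4-nvme-multi*.img files created earlier. Stripped of the libvirt wrapper, the NVMe-related options correspond to the bare QEMU arguments sketched below (shown in isolation; the rest of the VM definition is omitted).

# NVMe topology from the "Command line args" above, as bare QEMU options
# (only the NVMe devices are shown; all other VM options are left out).
qemu-system-x86_64 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

Inside the guest this appears as one controller with a single namespace and a second controller with three, matching the nvme0n1 and nvme1n1-nvme1n3 block devices that setup.sh status reports later in the log.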
00:02:31.606 00:02:31.615 [Pipeline] } 00:02:31.632 [Pipeline] // stage 00:02:31.642 [Pipeline] dir 00:02:31.643 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:02:31.645 [Pipeline] { 00:02:31.661 [Pipeline] catchError 00:02:31.663 [Pipeline] { 00:02:31.679 [Pipeline] sh 00:02:31.966 + vagrant ssh-config --host vagrant 00:02:31.966 + sed -ne /^Host/,$p 00:02:31.966 + tee ssh_conf 00:02:36.147 Host vagrant 00:02:36.147 HostName 192.168.121.88 00:02:36.147 User vagrant 00:02:36.147 Port 22 00:02:36.147 UserKnownHostsFile /dev/null 00:02:36.147 StrictHostKeyChecking no 00:02:36.147 PasswordAuthentication no 00:02:36.147 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:36.147 IdentitiesOnly yes 00:02:36.147 LogLevel FATAL 00:02:36.147 ForwardAgent yes 00:02:36.147 ForwardX11 yes 00:02:36.147 00:02:36.160 [Pipeline] withEnv 00:02:36.162 [Pipeline] { 00:02:36.177 [Pipeline] sh 00:02:36.452 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:36.452 source /etc/os-release 00:02:36.452 [[ -e /image.version ]] && img=$(< /image.version) 00:02:36.452 # Minimal, systemd-like check. 00:02:36.452 if [[ -e /.dockerenv ]]; then 00:02:36.452 # Clear garbage from the node's name: 00:02:36.452 # agt-er_autotest_547-896 -> autotest_547-896 00:02:36.452 # $HOSTNAME is the actual container id 00:02:36.452 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:36.453 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:36.453 # We can assume this is a mount from a host where container is running, 00:02:36.453 # so fetch its hostname to easily identify the target swarm worker. 00:02:36.453 container="$(< /etc/hostname) ($agent)" 00:02:36.453 else 00:02:36.453 # Fallback 00:02:36.453 container=$agent 00:02:36.453 fi 00:02:36.453 fi 00:02:36.453 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:36.453 00:02:36.463 [Pipeline] } 00:02:36.483 [Pipeline] // withEnv 00:02:36.493 [Pipeline] setCustomBuildProperty 00:02:36.508 [Pipeline] stage 00:02:36.510 [Pipeline] { (Tests) 00:02:36.528 [Pipeline] sh 00:02:36.801 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:37.106 [Pipeline] sh 00:02:37.382 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:37.396 [Pipeline] timeout 00:02:37.397 Timeout set to expire in 40 min 00:02:37.399 [Pipeline] { 00:02:37.414 [Pipeline] sh 00:02:37.691 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:38.256 HEAD is now at d8f06a5fe scripts/pkgdep: Drop support for downloading shfmt binaries 00:02:38.267 [Pipeline] sh 00:02:38.544 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:38.816 [Pipeline] sh 00:02:39.115 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:39.134 [Pipeline] sh 00:02:39.415 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:39.415 ++ readlink -f spdk_repo 00:02:39.415 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:39.415 + [[ -n /home/vagrant/spdk_repo ]] 00:02:39.415 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:39.415 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:39.415 + [[ -d 
/home/vagrant/spdk_repo/spdk ]] 00:02:39.415 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:39.415 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:39.415 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:39.415 + cd /home/vagrant/spdk_repo 00:02:39.415 + source /etc/os-release 00:02:39.415 ++ NAME='Fedora Linux' 00:02:39.415 ++ VERSION='38 (Cloud Edition)' 00:02:39.415 ++ ID=fedora 00:02:39.415 ++ VERSION_ID=38 00:02:39.415 ++ VERSION_CODENAME= 00:02:39.415 ++ PLATFORM_ID=platform:f38 00:02:39.415 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:39.415 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:39.415 ++ LOGO=fedora-logo-icon 00:02:39.415 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:39.415 ++ HOME_URL=https://fedoraproject.org/ 00:02:39.415 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:39.415 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:39.415 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:39.415 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:39.415 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:39.415 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:39.415 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:39.415 ++ SUPPORT_END=2024-05-14 00:02:39.415 ++ VARIANT='Cloud Edition' 00:02:39.415 ++ VARIANT_ID=cloud 00:02:39.415 + uname -a 00:02:39.415 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:39.415 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:39.981 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:39.981 Hugepages 00:02:39.981 node hugesize free / total 00:02:39.981 node0 1048576kB 0 / 0 00:02:39.981 node0 2048kB 0 / 0 00:02:39.981 00:02:39.981 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:39.981 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:39.981 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:39.981 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:39.981 + rm -f /tmp/spdk-ld-path 00:02:39.981 + source autorun-spdk.conf 00:02:39.981 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:39.981 ++ SPDK_TEST_NVMF=1 00:02:39.981 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:39.981 ++ SPDK_TEST_USDT=1 00:02:39.981 ++ SPDK_TEST_NVMF_MDNS=1 00:02:39.981 ++ SPDK_RUN_UBSAN=1 00:02:39.981 ++ NET_TYPE=virt 00:02:39.981 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:39.981 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:39.981 ++ RUN_NIGHTLY=0 00:02:39.981 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:39.981 + [[ -n '' ]] 00:02:39.981 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:39.981 + for M in /var/spdk/build-*-manifest.txt 00:02:39.981 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:39.981 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:40.240 + for M in /var/spdk/build-*-manifest.txt 00:02:40.240 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:40.240 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:40.240 ++ uname 00:02:40.240 + [[ Linux == \L\i\n\u\x ]] 00:02:40.240 + sudo dmesg -T 00:02:40.240 + sudo dmesg --clear 00:02:40.240 + dmesg_pid=5143 00:02:40.240 + sudo dmesg -Tw 00:02:40.240 + [[ Fedora Linux == FreeBSD ]] 00:02:40.240 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:40.240 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:40.240 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 
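The environment checks above lead into the job's actual test entry point. As a rough sketch, the same run can be reproduced inside the guest with the commands below (paths are the ones reported above; running this outside the CI-provisioned VM is an assumption).

# Rough sketch of the test entry point used by this job (paths as reported above).
cd /home/vagrant/spdk_repo
sudo ./spdk/scripts/setup.sh status      # hugepage and NVMe/virtio binding report, as in the table above
./spdk/autorun.sh ./autorun-spdk.conf    # autorun.sh reads the conf passed as its argument (SPDK_TEST_NVMF=1, tcp transport, ...)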
00:02:40.240 + [[ -x /usr/src/fio-static/fio ]] 00:02:40.240 + export FIO_BIN=/usr/src/fio-static/fio 00:02:40.240 + FIO_BIN=/usr/src/fio-static/fio 00:02:40.240 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:40.240 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:40.240 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:40.240 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:40.240 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:40.240 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:40.240 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:40.240 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:40.240 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:40.240 Test configuration: 00:02:40.240 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:40.240 SPDK_TEST_NVMF=1 00:02:40.240 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:40.240 SPDK_TEST_USDT=1 00:02:40.240 SPDK_TEST_NVMF_MDNS=1 00:02:40.240 SPDK_RUN_UBSAN=1 00:02:40.240 NET_TYPE=virt 00:02:40.240 SPDK_JSONRPC_GO_CLIENT=1 00:02:40.240 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:40.240 RUN_NIGHTLY=0 15:25:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:40.240 15:25:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:40.240 15:25:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:40.240 15:25:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:40.240 15:25:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.240 15:25:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.240 15:25:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.240 15:25:35 -- paths/export.sh@5 -- $ export PATH 00:02:40.240 15:25:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.240 15:25:35 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:40.240 15:25:35 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:40.240 15:25:35 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721057135.XXXXXX 00:02:40.240 
15:25:35 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721057135.kDbgmN 00:02:40.240 15:25:35 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:40.240 15:25:35 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:40.240 15:25:35 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:40.240 15:25:35 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:40.240 15:25:35 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:40.240 15:25:35 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:40.240 15:25:35 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:40.240 15:25:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.240 15:25:35 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:02:40.240 15:25:35 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:40.240 15:25:35 -- pm/common@17 -- $ local monitor 00:02:40.240 15:25:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.240 15:25:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.240 15:25:35 -- pm/common@25 -- $ sleep 1 00:02:40.240 15:25:35 -- pm/common@21 -- $ date +%s 00:02:40.240 15:25:35 -- pm/common@21 -- $ date +%s 00:02:40.240 15:25:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721057135 00:02:40.240 15:25:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721057135 00:02:40.240 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721057135_collect-vmstat.pm.log 00:02:40.240 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721057135_collect-cpu-load.pm.log 00:02:41.615 15:25:36 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:41.615 15:25:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:41.615 15:25:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:41.615 15:25:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:41.615 15:25:36 -- spdk/autobuild.sh@16 -- $ date -u 00:02:41.615 Mon Jul 15 03:25:36 PM UTC 2024 00:02:41.615 15:25:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:41.615 v24.09-pre-203-gd8f06a5fe 00:02:41.615 15:25:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:41.615 15:25:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:41.615 15:25:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:41.615 15:25:36 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:41.615 15:25:36 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:41.615 15:25:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:41.615 ************************************ 00:02:41.615 START TEST ubsan 00:02:41.615 ************************************ 00:02:41.615 using ubsan 00:02:41.615 15:25:36 ubsan -- common/autotest_common.sh@1123 -- 
$ echo 'using ubsan' 00:02:41.615 00:02:41.615 real 0m0.000s 00:02:41.615 user 0m0.000s 00:02:41.615 sys 0m0.000s 00:02:41.615 15:25:36 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:41.615 ************************************ 00:02:41.615 15:25:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:41.615 END TEST ubsan 00:02:41.615 ************************************ 00:02:41.615 15:25:36 -- common/autotest_common.sh@1142 -- $ return 0 00:02:41.615 15:25:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:41.615 15:25:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:41.615 15:25:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:41.615 15:25:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:41.615 15:25:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:41.615 15:25:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:41.615 15:25:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:41.615 15:25:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:41.615 15:25:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:02:41.615 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:41.615 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:41.873 Using 'verbs' RDMA provider 00:02:55.026 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:09.968 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:09.968 go version go1.21.1 linux/amd64 00:03:09.968 Creating mk/config.mk...done. 00:03:09.968 Creating mk/cc.flags.mk...done. 00:03:09.968 Type 'make' to build. 00:03:09.968 15:26:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:09.968 15:26:03 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:09.968 15:26:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:09.968 15:26:03 -- common/autotest_common.sh@10 -- $ set +x 00:03:09.968 ************************************ 00:03:09.968 START TEST make 00:03:09.968 ************************************ 00:03:09.968 15:26:03 make -- common/autotest_common.sh@1123 -- $ make -j10 00:03:09.968 make[1]: Nothing to be done for 'all'. 
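Before the DPDK submodule output that follows, the build itself is driven by the configure invocation and make target shown above; a condensed sketch of that step, with the same flags, is:

# Condensed sketch of the configure + build step above (flags copied from get_config_params plus --with-shared).
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan \
    --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
make -j10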
00:03:24.861 The Meson build system 00:03:24.861 Version: 1.3.1 00:03:24.861 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:24.861 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:24.861 Build type: native build 00:03:24.861 Program cat found: YES (/usr/bin/cat) 00:03:24.861 Project name: DPDK 00:03:24.861 Project version: 24.03.0 00:03:24.861 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:24.861 C linker for the host machine: cc ld.bfd 2.39-16 00:03:24.861 Host machine cpu family: x86_64 00:03:24.861 Host machine cpu: x86_64 00:03:24.861 Message: ## Building in Developer Mode ## 00:03:24.861 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:24.861 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:24.861 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:24.861 Program python3 found: YES (/usr/bin/python3) 00:03:24.861 Program cat found: YES (/usr/bin/cat) 00:03:24.861 Compiler for C supports arguments -march=native: YES 00:03:24.861 Checking for size of "void *" : 8 00:03:24.861 Checking for size of "void *" : 8 (cached) 00:03:24.861 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:24.861 Library m found: YES 00:03:24.862 Library numa found: YES 00:03:24.862 Has header "numaif.h" : YES 00:03:24.862 Library fdt found: NO 00:03:24.862 Library execinfo found: NO 00:03:24.862 Has header "execinfo.h" : YES 00:03:24.862 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:24.862 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:24.862 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:24.862 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:24.862 Run-time dependency openssl found: YES 3.0.9 00:03:24.862 Run-time dependency libpcap found: YES 1.10.4 00:03:24.862 Has header "pcap.h" with dependency libpcap: YES 00:03:24.862 Compiler for C supports arguments -Wcast-qual: YES 00:03:24.862 Compiler for C supports arguments -Wdeprecated: YES 00:03:24.862 Compiler for C supports arguments -Wformat: YES 00:03:24.862 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:24.862 Compiler for C supports arguments -Wformat-security: NO 00:03:24.862 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:24.862 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:24.862 Compiler for C supports arguments -Wnested-externs: YES 00:03:24.862 Compiler for C supports arguments -Wold-style-definition: YES 00:03:24.862 Compiler for C supports arguments -Wpointer-arith: YES 00:03:24.862 Compiler for C supports arguments -Wsign-compare: YES 00:03:24.862 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:24.862 Compiler for C supports arguments -Wundef: YES 00:03:24.862 Compiler for C supports arguments -Wwrite-strings: YES 00:03:24.862 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:24.862 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:24.862 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:24.862 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:24.862 Program objdump found: YES (/usr/bin/objdump) 00:03:24.862 Compiler for C supports arguments -mavx512f: YES 00:03:24.862 Checking if "AVX512 checking" compiles: YES 00:03:24.862 Fetching value of define "__SSE4_2__" : 1 00:03:24.862 Fetching value of define 
"__AES__" : 1 00:03:24.862 Fetching value of define "__AVX__" : 1 00:03:24.862 Fetching value of define "__AVX2__" : 1 00:03:24.862 Fetching value of define "__AVX512BW__" : (undefined) 00:03:24.862 Fetching value of define "__AVX512CD__" : (undefined) 00:03:24.862 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:24.862 Fetching value of define "__AVX512F__" : (undefined) 00:03:24.862 Fetching value of define "__AVX512VL__" : (undefined) 00:03:24.862 Fetching value of define "__PCLMUL__" : 1 00:03:24.862 Fetching value of define "__RDRND__" : 1 00:03:24.862 Fetching value of define "__RDSEED__" : 1 00:03:24.862 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:24.862 Fetching value of define "__znver1__" : (undefined) 00:03:24.862 Fetching value of define "__znver2__" : (undefined) 00:03:24.862 Fetching value of define "__znver3__" : (undefined) 00:03:24.862 Fetching value of define "__znver4__" : (undefined) 00:03:24.862 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:24.862 Message: lib/log: Defining dependency "log" 00:03:24.862 Message: lib/kvargs: Defining dependency "kvargs" 00:03:24.862 Message: lib/telemetry: Defining dependency "telemetry" 00:03:24.862 Checking for function "getentropy" : NO 00:03:24.862 Message: lib/eal: Defining dependency "eal" 00:03:24.862 Message: lib/ring: Defining dependency "ring" 00:03:24.862 Message: lib/rcu: Defining dependency "rcu" 00:03:24.862 Message: lib/mempool: Defining dependency "mempool" 00:03:24.862 Message: lib/mbuf: Defining dependency "mbuf" 00:03:24.862 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:24.862 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:24.862 Compiler for C supports arguments -mpclmul: YES 00:03:24.862 Compiler for C supports arguments -maes: YES 00:03:24.862 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:24.862 Compiler for C supports arguments -mavx512bw: YES 00:03:24.862 Compiler for C supports arguments -mavx512dq: YES 00:03:24.862 Compiler for C supports arguments -mavx512vl: YES 00:03:24.862 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:24.862 Compiler for C supports arguments -mavx2: YES 00:03:24.862 Compiler for C supports arguments -mavx: YES 00:03:24.862 Message: lib/net: Defining dependency "net" 00:03:24.862 Message: lib/meter: Defining dependency "meter" 00:03:24.862 Message: lib/ethdev: Defining dependency "ethdev" 00:03:24.862 Message: lib/pci: Defining dependency "pci" 00:03:24.862 Message: lib/cmdline: Defining dependency "cmdline" 00:03:24.862 Message: lib/hash: Defining dependency "hash" 00:03:24.862 Message: lib/timer: Defining dependency "timer" 00:03:24.862 Message: lib/compressdev: Defining dependency "compressdev" 00:03:24.862 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:24.862 Message: lib/dmadev: Defining dependency "dmadev" 00:03:24.862 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:24.862 Message: lib/power: Defining dependency "power" 00:03:24.862 Message: lib/reorder: Defining dependency "reorder" 00:03:24.862 Message: lib/security: Defining dependency "security" 00:03:24.862 Has header "linux/userfaultfd.h" : YES 00:03:24.862 Has header "linux/vduse.h" : YES 00:03:24.862 Message: lib/vhost: Defining dependency "vhost" 00:03:24.862 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:24.862 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:24.862 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:24.862 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:24.862 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:24.862 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:24.862 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:24.862 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:24.862 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:24.862 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:24.862 Program doxygen found: YES (/usr/bin/doxygen) 00:03:24.862 Configuring doxy-api-html.conf using configuration 00:03:24.862 Configuring doxy-api-man.conf using configuration 00:03:24.862 Program mandb found: YES (/usr/bin/mandb) 00:03:24.862 Program sphinx-build found: NO 00:03:24.862 Configuring rte_build_config.h using configuration 00:03:24.862 Message: 00:03:24.862 ================= 00:03:24.862 Applications Enabled 00:03:24.862 ================= 00:03:24.862 00:03:24.862 apps: 00:03:24.862 00:03:24.862 00:03:24.862 Message: 00:03:24.862 ================= 00:03:24.862 Libraries Enabled 00:03:24.862 ================= 00:03:24.862 00:03:24.862 libs: 00:03:24.862 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:24.862 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:24.862 cryptodev, dmadev, power, reorder, security, vhost, 00:03:24.862 00:03:24.862 Message: 00:03:24.862 =============== 00:03:24.862 Drivers Enabled 00:03:24.862 =============== 00:03:24.862 00:03:24.862 common: 00:03:24.862 00:03:24.862 bus: 00:03:24.862 pci, vdev, 00:03:24.862 mempool: 00:03:24.862 ring, 00:03:24.862 dma: 00:03:24.862 00:03:24.862 net: 00:03:24.862 00:03:24.862 crypto: 00:03:24.862 00:03:24.862 compress: 00:03:24.862 00:03:24.862 vdpa: 00:03:24.862 00:03:24.862 00:03:24.862 Message: 00:03:24.862 ================= 00:03:24.862 Content Skipped 00:03:24.862 ================= 00:03:24.862 00:03:24.862 apps: 00:03:24.862 dumpcap: explicitly disabled via build config 00:03:24.862 graph: explicitly disabled via build config 00:03:24.862 pdump: explicitly disabled via build config 00:03:24.862 proc-info: explicitly disabled via build config 00:03:24.862 test-acl: explicitly disabled via build config 00:03:24.862 test-bbdev: explicitly disabled via build config 00:03:24.862 test-cmdline: explicitly disabled via build config 00:03:24.862 test-compress-perf: explicitly disabled via build config 00:03:24.862 test-crypto-perf: explicitly disabled via build config 00:03:24.862 test-dma-perf: explicitly disabled via build config 00:03:24.862 test-eventdev: explicitly disabled via build config 00:03:24.862 test-fib: explicitly disabled via build config 00:03:24.862 test-flow-perf: explicitly disabled via build config 00:03:24.862 test-gpudev: explicitly disabled via build config 00:03:24.862 test-mldev: explicitly disabled via build config 00:03:24.862 test-pipeline: explicitly disabled via build config 00:03:24.862 test-pmd: explicitly disabled via build config 00:03:24.862 test-regex: explicitly disabled via build config 00:03:24.862 test-sad: explicitly disabled via build config 00:03:24.862 test-security-perf: explicitly disabled via build config 00:03:24.862 00:03:24.862 libs: 00:03:24.862 argparse: explicitly disabled via build config 00:03:24.862 metrics: explicitly disabled via build config 00:03:24.862 acl: explicitly disabled via build config 00:03:24.862 bbdev: explicitly disabled via build config 00:03:24.862 
bitratestats: explicitly disabled via build config 00:03:24.862 bpf: explicitly disabled via build config 00:03:24.862 cfgfile: explicitly disabled via build config 00:03:24.862 distributor: explicitly disabled via build config 00:03:24.862 efd: explicitly disabled via build config 00:03:24.862 eventdev: explicitly disabled via build config 00:03:24.862 dispatcher: explicitly disabled via build config 00:03:24.862 gpudev: explicitly disabled via build config 00:03:24.862 gro: explicitly disabled via build config 00:03:24.862 gso: explicitly disabled via build config 00:03:24.862 ip_frag: explicitly disabled via build config 00:03:24.862 jobstats: explicitly disabled via build config 00:03:24.862 latencystats: explicitly disabled via build config 00:03:24.862 lpm: explicitly disabled via build config 00:03:24.862 member: explicitly disabled via build config 00:03:24.862 pcapng: explicitly disabled via build config 00:03:24.862 rawdev: explicitly disabled via build config 00:03:24.862 regexdev: explicitly disabled via build config 00:03:24.862 mldev: explicitly disabled via build config 00:03:24.862 rib: explicitly disabled via build config 00:03:24.862 sched: explicitly disabled via build config 00:03:24.862 stack: explicitly disabled via build config 00:03:24.862 ipsec: explicitly disabled via build config 00:03:24.862 pdcp: explicitly disabled via build config 00:03:24.862 fib: explicitly disabled via build config 00:03:24.862 port: explicitly disabled via build config 00:03:24.862 pdump: explicitly disabled via build config 00:03:24.862 table: explicitly disabled via build config 00:03:24.862 pipeline: explicitly disabled via build config 00:03:24.862 graph: explicitly disabled via build config 00:03:24.862 node: explicitly disabled via build config 00:03:24.862 00:03:24.862 drivers: 00:03:24.862 common/cpt: not in enabled drivers build config 00:03:24.863 common/dpaax: not in enabled drivers build config 00:03:24.863 common/iavf: not in enabled drivers build config 00:03:24.863 common/idpf: not in enabled drivers build config 00:03:24.863 common/ionic: not in enabled drivers build config 00:03:24.863 common/mvep: not in enabled drivers build config 00:03:24.863 common/octeontx: not in enabled drivers build config 00:03:24.863 bus/auxiliary: not in enabled drivers build config 00:03:24.863 bus/cdx: not in enabled drivers build config 00:03:24.863 bus/dpaa: not in enabled drivers build config 00:03:24.863 bus/fslmc: not in enabled drivers build config 00:03:24.863 bus/ifpga: not in enabled drivers build config 00:03:24.863 bus/platform: not in enabled drivers build config 00:03:24.863 bus/uacce: not in enabled drivers build config 00:03:24.863 bus/vmbus: not in enabled drivers build config 00:03:24.863 common/cnxk: not in enabled drivers build config 00:03:24.863 common/mlx5: not in enabled drivers build config 00:03:24.863 common/nfp: not in enabled drivers build config 00:03:24.863 common/nitrox: not in enabled drivers build config 00:03:24.863 common/qat: not in enabled drivers build config 00:03:24.863 common/sfc_efx: not in enabled drivers build config 00:03:24.863 mempool/bucket: not in enabled drivers build config 00:03:24.863 mempool/cnxk: not in enabled drivers build config 00:03:24.863 mempool/dpaa: not in enabled drivers build config 00:03:24.863 mempool/dpaa2: not in enabled drivers build config 00:03:24.863 mempool/octeontx: not in enabled drivers build config 00:03:24.863 mempool/stack: not in enabled drivers build config 00:03:24.863 dma/cnxk: not in enabled drivers build 
config 00:03:24.863 dma/dpaa: not in enabled drivers build config 00:03:24.863 dma/dpaa2: not in enabled drivers build config 00:03:24.863 dma/hisilicon: not in enabled drivers build config 00:03:24.863 dma/idxd: not in enabled drivers build config 00:03:24.863 dma/ioat: not in enabled drivers build config 00:03:24.863 dma/skeleton: not in enabled drivers build config 00:03:24.863 net/af_packet: not in enabled drivers build config 00:03:24.863 net/af_xdp: not in enabled drivers build config 00:03:24.863 net/ark: not in enabled drivers build config 00:03:24.863 net/atlantic: not in enabled drivers build config 00:03:24.863 net/avp: not in enabled drivers build config 00:03:24.863 net/axgbe: not in enabled drivers build config 00:03:24.863 net/bnx2x: not in enabled drivers build config 00:03:24.863 net/bnxt: not in enabled drivers build config 00:03:24.863 net/bonding: not in enabled drivers build config 00:03:24.863 net/cnxk: not in enabled drivers build config 00:03:24.863 net/cpfl: not in enabled drivers build config 00:03:24.863 net/cxgbe: not in enabled drivers build config 00:03:24.863 net/dpaa: not in enabled drivers build config 00:03:24.863 net/dpaa2: not in enabled drivers build config 00:03:24.863 net/e1000: not in enabled drivers build config 00:03:24.863 net/ena: not in enabled drivers build config 00:03:24.863 net/enetc: not in enabled drivers build config 00:03:24.863 net/enetfec: not in enabled drivers build config 00:03:24.863 net/enic: not in enabled drivers build config 00:03:24.863 net/failsafe: not in enabled drivers build config 00:03:24.863 net/fm10k: not in enabled drivers build config 00:03:24.863 net/gve: not in enabled drivers build config 00:03:24.863 net/hinic: not in enabled drivers build config 00:03:24.863 net/hns3: not in enabled drivers build config 00:03:24.863 net/i40e: not in enabled drivers build config 00:03:24.863 net/iavf: not in enabled drivers build config 00:03:24.863 net/ice: not in enabled drivers build config 00:03:24.863 net/idpf: not in enabled drivers build config 00:03:24.863 net/igc: not in enabled drivers build config 00:03:24.863 net/ionic: not in enabled drivers build config 00:03:24.863 net/ipn3ke: not in enabled drivers build config 00:03:24.863 net/ixgbe: not in enabled drivers build config 00:03:24.863 net/mana: not in enabled drivers build config 00:03:24.863 net/memif: not in enabled drivers build config 00:03:24.863 net/mlx4: not in enabled drivers build config 00:03:24.863 net/mlx5: not in enabled drivers build config 00:03:24.863 net/mvneta: not in enabled drivers build config 00:03:24.863 net/mvpp2: not in enabled drivers build config 00:03:24.863 net/netvsc: not in enabled drivers build config 00:03:24.863 net/nfb: not in enabled drivers build config 00:03:24.863 net/nfp: not in enabled drivers build config 00:03:24.863 net/ngbe: not in enabled drivers build config 00:03:24.863 net/null: not in enabled drivers build config 00:03:24.863 net/octeontx: not in enabled drivers build config 00:03:24.863 net/octeon_ep: not in enabled drivers build config 00:03:24.863 net/pcap: not in enabled drivers build config 00:03:24.863 net/pfe: not in enabled drivers build config 00:03:24.863 net/qede: not in enabled drivers build config 00:03:24.863 net/ring: not in enabled drivers build config 00:03:24.863 net/sfc: not in enabled drivers build config 00:03:24.863 net/softnic: not in enabled drivers build config 00:03:24.863 net/tap: not in enabled drivers build config 00:03:24.863 net/thunderx: not in enabled drivers build config 00:03:24.863 
net/txgbe: not in enabled drivers build config 00:03:24.863 net/vdev_netvsc: not in enabled drivers build config 00:03:24.863 net/vhost: not in enabled drivers build config 00:03:24.863 net/virtio: not in enabled drivers build config 00:03:24.863 net/vmxnet3: not in enabled drivers build config 00:03:24.863 raw/*: missing internal dependency, "rawdev" 00:03:24.863 crypto/armv8: not in enabled drivers build config 00:03:24.863 crypto/bcmfs: not in enabled drivers build config 00:03:24.863 crypto/caam_jr: not in enabled drivers build config 00:03:24.863 crypto/ccp: not in enabled drivers build config 00:03:24.863 crypto/cnxk: not in enabled drivers build config 00:03:24.863 crypto/dpaa_sec: not in enabled drivers build config 00:03:24.863 crypto/dpaa2_sec: not in enabled drivers build config 00:03:24.863 crypto/ipsec_mb: not in enabled drivers build config 00:03:24.863 crypto/mlx5: not in enabled drivers build config 00:03:24.863 crypto/mvsam: not in enabled drivers build config 00:03:24.863 crypto/nitrox: not in enabled drivers build config 00:03:24.863 crypto/null: not in enabled drivers build config 00:03:24.863 crypto/octeontx: not in enabled drivers build config 00:03:24.863 crypto/openssl: not in enabled drivers build config 00:03:24.863 crypto/scheduler: not in enabled drivers build config 00:03:24.863 crypto/uadk: not in enabled drivers build config 00:03:24.863 crypto/virtio: not in enabled drivers build config 00:03:24.863 compress/isal: not in enabled drivers build config 00:03:24.863 compress/mlx5: not in enabled drivers build config 00:03:24.863 compress/nitrox: not in enabled drivers build config 00:03:24.863 compress/octeontx: not in enabled drivers build config 00:03:24.863 compress/zlib: not in enabled drivers build config 00:03:24.863 regex/*: missing internal dependency, "regexdev" 00:03:24.863 ml/*: missing internal dependency, "mldev" 00:03:24.863 vdpa/ifc: not in enabled drivers build config 00:03:24.863 vdpa/mlx5: not in enabled drivers build config 00:03:24.863 vdpa/nfp: not in enabled drivers build config 00:03:24.863 vdpa/sfc: not in enabled drivers build config 00:03:24.863 event/*: missing internal dependency, "eventdev" 00:03:24.863 baseband/*: missing internal dependency, "bbdev" 00:03:24.863 gpu/*: missing internal dependency, "gpudev" 00:03:24.863 00:03:24.863 00:03:24.863 Build targets in project: 85 00:03:24.863 00:03:24.863 DPDK 24.03.0 00:03:24.863 00:03:24.863 User defined options 00:03:24.863 buildtype : debug 00:03:24.863 default_library : shared 00:03:24.863 libdir : lib 00:03:24.863 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:24.863 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:24.863 c_link_args : 00:03:24.863 cpu_instruction_set: native 00:03:24.863 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:24.863 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:24.863 enable_docs : false 00:03:24.863 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:24.863 enable_kmods : false 00:03:24.863 max_lcores : 128 00:03:24.863 tests : false 00:03:24.863 00:03:24.863 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:24.863 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:24.863 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:24.863 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:24.863 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:24.863 [4/268] Linking static target lib/librte_kvargs.a 00:03:24.864 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:24.864 [6/268] Linking static target lib/librte_log.a 00:03:24.864 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.864 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:24.864 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:24.864 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:24.864 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:24.864 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:24.864 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:24.864 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:24.864 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:24.864 [16/268] Linking static target lib/librte_telemetry.a 00:03:24.864 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:24.864 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.864 [19/268] Linking target lib/librte_log.so.24.1 00:03:24.864 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:24.864 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:24.864 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:25.121 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:25.121 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:25.121 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:25.121 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:25.121 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:25.121 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:25.121 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.121 [30/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:25.380 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:25.380 [32/268] Linking target lib/librte_telemetry.so.24.1 00:03:25.380 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:25.380 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:25.638 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:25.638 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:25.896 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:25.896 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:25.896 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:26.155 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:26.155 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:26.155 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:26.155 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:26.155 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:26.155 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:26.413 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:26.413 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:26.413 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:26.671 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:26.671 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:26.929 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:27.187 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:27.187 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:27.187 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:27.187 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:27.187 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:27.187 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:27.445 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:27.445 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:27.445 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:27.704 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:27.704 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:27.961 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:28.218 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:28.218 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:28.218 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:28.218 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:28.218 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:28.485 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:28.743 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:28.743 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:28.743 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:29.001 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:29.001 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:29.001 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:29.001 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:29.001 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:29.001 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:29.001 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:29.259 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:29.259 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:29.259 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:29.518 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:29.518 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:29.518 [85/268] Linking static target lib/librte_ring.a 00:03:29.776 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:29.776 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:29.776 [88/268] Linking static target lib/librte_rcu.a 00:03:29.776 [89/268] Linking static target lib/librte_eal.a 00:03:29.776 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:29.776 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:30.034 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:30.034 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:30.034 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.034 [95/268] Linking static target lib/librte_mempool.a 00:03:30.034 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:30.292 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:30.292 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.292 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:30.550 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:30.550 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:30.550 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:30.809 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:30.809 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:30.809 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:31.068 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:31.068 [107/268] Linking static target lib/librte_mbuf.a 00:03:31.068 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:31.068 [109/268] Linking static target lib/librte_net.a 00:03:31.333 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.333 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:31.333 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:31.333 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:31.333 [114/268] Linking static target lib/librte_meter.a 00:03:31.604 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.604 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:31.863 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:31.863 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.863 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:32.122 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:32.122 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:32.380 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:32.638 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:32.638 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:32.896 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:32.896 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:32.896 [127/268] Linking static target lib/librte_pci.a 00:03:32.896 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:32.896 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:32.896 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:32.896 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:32.896 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:33.154 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:33.154 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:33.154 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:33.154 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.154 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:33.154 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:33.412 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:33.413 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:33.413 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:33.413 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:33.413 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:33.413 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:33.413 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:33.670 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:33.929 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:33.929 [148/268] Linking static target lib/librte_ethdev.a 00:03:33.929 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:33.929 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:33.929 [151/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:33.929 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:33.929 [153/268] Linking static target lib/librte_timer.a 00:03:33.929 [154/268] Linking static target lib/librte_cmdline.a 00:03:33.929 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:34.186 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:34.186 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:34.443 [158/268] Linking static target lib/librte_hash.a 00:03:34.700 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:34.700 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.700 [161/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:34.700 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:34.700 [163/268] Linking static target lib/librte_compressdev.a 00:03:34.700 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:34.958 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:35.216 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:35.216 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:35.216 [168/268] Linking static target lib/librte_dmadev.a 00:03:35.473 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:35.473 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:35.473 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:35.473 [172/268] Linking static target lib/librte_cryptodev.a 00:03:35.473 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:35.473 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.473 [175/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:35.473 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.730 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.988 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:35.988 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:35.988 [180/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:35.988 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:35.988 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:35.988 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.302 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:36.302 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:36.302 [186/268] Linking static target lib/librte_power.a 00:03:36.866 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:36.866 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:36.866 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:36.866 [190/268] Linking static target lib/librte_reorder.a 00:03:36.866 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:36.866 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:36.866 [193/268] Linking static target lib/librte_security.a 00:03:37.123 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:37.123 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.380 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.380 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.638 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:37.638 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.638 [200/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:37.638 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:37.895 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:37.895 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:37.895 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:38.154 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:38.154 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:38.154 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:38.488 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:38.488 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:38.488 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:38.488 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:38.488 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:38.488 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:38.488 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:38.488 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:38.488 [216/268] Linking static target drivers/librte_bus_pci.a 00:03:38.746 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:38.746 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:38.746 [219/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:38.746 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:38.746 [221/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:38.746 [222/268] Linking static target drivers/librte_bus_vdev.a 00:03:39.004 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:39.004 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:39.004 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:39.004 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:39.004 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.004 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.936 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:39.936 [230/268] Linking static target lib/librte_vhost.a 00:03:40.219 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.476 [232/268] Linking target lib/librte_eal.so.24.1 00:03:40.734 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:40.734 [234/268] Linking target lib/librte_timer.so.24.1 00:03:40.734 [235/268] Linking target lib/librte_pci.so.24.1 00:03:40.734 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:40.734 [237/268] Linking target lib/librte_ring.so.24.1 00:03:40.734 [238/268] Linking target lib/librte_meter.so.24.1 00:03:40.734 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:40.991 [240/268] Generating 
symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:40.991 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:40.991 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:40.991 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:40.991 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:40.991 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:40.991 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:40.991 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:41.250 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:41.250 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:41.250 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:41.250 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:41.250 [252/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.508 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.508 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:41.508 [255/268] Linking target lib/librte_net.so.24.1 00:03:41.508 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:41.508 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:41.508 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:41.766 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:41.767 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:41.767 [261/268] Linking target lib/librte_hash.so.24.1 00:03:41.767 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:41.767 [263/268] Linking target lib/librte_security.so.24.1 00:03:41.767 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:41.767 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:42.025 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:42.025 [267/268] Linking target lib/librte_power.so.24.1 00:03:42.025 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:42.025 INFO: autodetecting backend as ninja 00:03:42.025 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:43.398 CC lib/ut/ut.o 00:03:43.398 CC lib/ut_mock/mock.o 00:03:43.398 CC lib/log/log.o 00:03:43.398 CC lib/log/log_flags.o 00:03:43.398 CC lib/log/log_deprecated.o 00:03:43.398 LIB libspdk_ut_mock.a 00:03:43.398 LIB libspdk_ut.a 00:03:43.398 SO libspdk_ut.so.2.0 00:03:43.398 SO libspdk_ut_mock.so.6.0 00:03:43.398 LIB libspdk_log.a 00:03:43.398 SYMLINK libspdk_ut.so 00:03:43.399 SYMLINK libspdk_ut_mock.so 00:03:43.399 SO libspdk_log.so.7.0 00:03:43.399 SYMLINK libspdk_log.so 00:03:43.657 CC lib/dma/dma.o 00:03:43.657 CXX lib/trace_parser/trace.o 00:03:43.657 CC lib/ioat/ioat.o 00:03:43.657 CC lib/util/base64.o 00:03:43.657 CC lib/util/bit_array.o 00:03:43.657 CC lib/util/cpuset.o 00:03:43.657 CC lib/util/crc32.o 00:03:43.657 CC lib/util/crc16.o 00:03:43.657 CC lib/util/crc32c.o 00:03:43.913 CC lib/vfio_user/host/vfio_user_pci.o 00:03:43.913 CC lib/util/crc32_ieee.o 00:03:43.913 CC lib/vfio_user/host/vfio_user.o 00:03:43.913 CC lib/util/crc64.o 00:03:43.913 CC 
lib/util/dif.o 00:03:43.913 LIB libspdk_dma.a 00:03:44.171 CC lib/util/fd.o 00:03:44.171 SO libspdk_dma.so.4.0 00:03:44.171 CC lib/util/file.o 00:03:44.171 CC lib/util/hexlify.o 00:03:44.171 SYMLINK libspdk_dma.so 00:03:44.171 CC lib/util/iov.o 00:03:44.171 CC lib/util/math.o 00:03:44.171 LIB libspdk_ioat.a 00:03:44.171 CC lib/util/pipe.o 00:03:44.171 SO libspdk_ioat.so.7.0 00:03:44.171 CC lib/util/strerror_tls.o 00:03:44.171 LIB libspdk_vfio_user.a 00:03:44.171 SYMLINK libspdk_ioat.so 00:03:44.428 CC lib/util/string.o 00:03:44.428 CC lib/util/uuid.o 00:03:44.428 CC lib/util/fd_group.o 00:03:44.428 CC lib/util/xor.o 00:03:44.428 SO libspdk_vfio_user.so.5.0 00:03:44.428 SYMLINK libspdk_vfio_user.so 00:03:44.428 CC lib/util/zipf.o 00:03:44.687 LIB libspdk_util.a 00:03:44.687 LIB libspdk_trace_parser.a 00:03:44.945 SO libspdk_trace_parser.so.5.0 00:03:44.945 SO libspdk_util.so.9.1 00:03:44.945 SYMLINK libspdk_trace_parser.so 00:03:45.203 SYMLINK libspdk_util.so 00:03:45.203 CC lib/rdma_utils/rdma_utils.o 00:03:45.203 CC lib/idxd/idxd.o 00:03:45.203 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:45.203 CC lib/rdma_provider/common.o 00:03:45.203 CC lib/idxd/idxd_user.o 00:03:45.203 CC lib/conf/conf.o 00:03:45.203 CC lib/idxd/idxd_kernel.o 00:03:45.203 CC lib/json/json_parse.o 00:03:45.203 CC lib/vmd/vmd.o 00:03:45.462 CC lib/env_dpdk/env.o 00:03:45.720 CC lib/env_dpdk/memory.o 00:03:45.720 CC lib/env_dpdk/pci.o 00:03:45.720 LIB libspdk_rdma_provider.a 00:03:45.720 SO libspdk_rdma_provider.so.6.0 00:03:45.720 LIB libspdk_conf.a 00:03:45.720 CC lib/json/json_util.o 00:03:45.720 CC lib/json/json_write.o 00:03:45.720 SO libspdk_conf.so.6.0 00:03:45.720 LIB libspdk_rdma_utils.a 00:03:45.720 SYMLINK libspdk_rdma_provider.so 00:03:45.720 SO libspdk_rdma_utils.so.1.0 00:03:45.720 SYMLINK libspdk_conf.so 00:03:45.978 CC lib/env_dpdk/init.o 00:03:45.978 CC lib/vmd/led.o 00:03:45.978 SYMLINK libspdk_rdma_utils.so 00:03:45.978 CC lib/env_dpdk/threads.o 00:03:45.978 CC lib/env_dpdk/pci_ioat.o 00:03:46.237 CC lib/env_dpdk/pci_virtio.o 00:03:46.237 LIB libspdk_vmd.a 00:03:46.237 CC lib/env_dpdk/pci_vmd.o 00:03:46.237 SO libspdk_vmd.so.6.0 00:03:46.237 LIB libspdk_idxd.a 00:03:46.237 CC lib/env_dpdk/pci_idxd.o 00:03:46.237 LIB libspdk_json.a 00:03:46.237 CC lib/env_dpdk/pci_event.o 00:03:46.237 SO libspdk_json.so.6.0 00:03:46.237 SO libspdk_idxd.so.12.0 00:03:46.237 SYMLINK libspdk_vmd.so 00:03:46.237 CC lib/env_dpdk/sigbus_handler.o 00:03:46.237 CC lib/env_dpdk/pci_dpdk.o 00:03:46.495 SYMLINK libspdk_idxd.so 00:03:46.495 SYMLINK libspdk_json.so 00:03:46.495 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:46.495 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:46.753 CC lib/jsonrpc/jsonrpc_server.o 00:03:46.753 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:46.753 CC lib/jsonrpc/jsonrpc_client.o 00:03:46.753 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:47.011 LIB libspdk_jsonrpc.a 00:03:47.011 SO libspdk_jsonrpc.so.6.0 00:03:47.011 SYMLINK libspdk_jsonrpc.so 00:03:47.270 LIB libspdk_env_dpdk.a 00:03:47.270 CC lib/rpc/rpc.o 00:03:47.528 SO libspdk_env_dpdk.so.14.1 00:03:47.529 LIB libspdk_rpc.a 00:03:47.529 SO libspdk_rpc.so.6.0 00:03:47.529 SYMLINK libspdk_env_dpdk.so 00:03:47.787 SYMLINK libspdk_rpc.so 00:03:47.787 CC lib/notify/notify.o 00:03:47.787 CC lib/notify/notify_rpc.o 00:03:48.046 CC lib/trace/trace.o 00:03:48.046 CC lib/keyring/keyring.o 00:03:48.046 CC lib/trace/trace_rpc.o 00:03:48.046 CC lib/trace/trace_flags.o 00:03:48.046 CC lib/keyring/keyring_rpc.o 00:03:48.046 LIB libspdk_notify.a 00:03:48.303 SO libspdk_notify.so.6.0 
00:03:48.303 LIB libspdk_trace.a 00:03:48.303 LIB libspdk_keyring.a 00:03:48.303 SO libspdk_keyring.so.1.0 00:03:48.303 SYMLINK libspdk_notify.so 00:03:48.303 SO libspdk_trace.so.10.0 00:03:48.303 SYMLINK libspdk_keyring.so 00:03:48.303 SYMLINK libspdk_trace.so 00:03:48.560 CC lib/thread/thread.o 00:03:48.560 CC lib/thread/iobuf.o 00:03:48.560 CC lib/sock/sock.o 00:03:48.560 CC lib/sock/sock_rpc.o 00:03:49.125 LIB libspdk_sock.a 00:03:49.125 SO libspdk_sock.so.10.0 00:03:49.125 SYMLINK libspdk_sock.so 00:03:49.382 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:49.383 CC lib/nvme/nvme_ctrlr.o 00:03:49.383 CC lib/nvme/nvme_fabric.o 00:03:49.383 CC lib/nvme/nvme_ns.o 00:03:49.383 CC lib/nvme/nvme_ns_cmd.o 00:03:49.383 CC lib/nvme/nvme_pcie_common.o 00:03:49.383 CC lib/nvme/nvme_pcie.o 00:03:49.383 CC lib/nvme/nvme_qpair.o 00:03:49.383 CC lib/nvme/nvme.o 00:03:50.316 LIB libspdk_thread.a 00:03:50.316 SO libspdk_thread.so.10.1 00:03:50.316 CC lib/nvme/nvme_quirks.o 00:03:50.316 CC lib/nvme/nvme_transport.o 00:03:50.574 SYMLINK libspdk_thread.so 00:03:50.574 CC lib/nvme/nvme_discovery.o 00:03:50.574 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:50.574 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:50.833 CC lib/nvme/nvme_tcp.o 00:03:50.833 CC lib/nvme/nvme_opal.o 00:03:50.833 CC lib/nvme/nvme_io_msg.o 00:03:51.091 CC lib/nvme/nvme_poll_group.o 00:03:51.091 CC lib/nvme/nvme_zns.o 00:03:51.091 CC lib/nvme/nvme_stubs.o 00:03:51.350 CC lib/nvme/nvme_auth.o 00:03:51.350 CC lib/nvme/nvme_cuse.o 00:03:51.350 CC lib/accel/accel.o 00:03:51.607 CC lib/accel/accel_rpc.o 00:03:51.607 CC lib/nvme/nvme_rdma.o 00:03:51.864 CC lib/accel/accel_sw.o 00:03:52.427 CC lib/blob/blobstore.o 00:03:52.427 CC lib/init/json_config.o 00:03:52.427 CC lib/virtio/virtio.o 00:03:52.427 CC lib/virtio/virtio_vhost_user.o 00:03:52.427 CC lib/blob/request.o 00:03:52.427 CC lib/blob/zeroes.o 00:03:52.427 LIB libspdk_accel.a 00:03:52.685 SO libspdk_accel.so.15.1 00:03:52.685 CC lib/init/subsystem.o 00:03:52.685 SYMLINK libspdk_accel.so 00:03:52.685 CC lib/virtio/virtio_vfio_user.o 00:03:52.685 CC lib/init/subsystem_rpc.o 00:03:52.685 CC lib/init/rpc.o 00:03:52.941 CC lib/virtio/virtio_pci.o 00:03:52.941 CC lib/blob/blob_bs_dev.o 00:03:52.941 CC lib/bdev/bdev.o 00:03:52.941 CC lib/bdev/bdev_rpc.o 00:03:52.941 LIB libspdk_init.a 00:03:53.197 SO libspdk_init.so.5.0 00:03:53.197 CC lib/bdev/bdev_zone.o 00:03:53.197 SYMLINK libspdk_init.so 00:03:53.197 CC lib/bdev/part.o 00:03:53.197 CC lib/bdev/scsi_nvme.o 00:03:53.453 LIB libspdk_virtio.a 00:03:53.453 LIB libspdk_nvme.a 00:03:53.453 SO libspdk_virtio.so.7.0 00:03:53.710 CC lib/event/app.o 00:03:53.710 CC lib/event/reactor.o 00:03:53.710 CC lib/event/log_rpc.o 00:03:53.710 CC lib/event/app_rpc.o 00:03:53.710 CC lib/event/scheduler_static.o 00:03:53.710 SYMLINK libspdk_virtio.so 00:03:53.710 SO libspdk_nvme.so.13.1 00:03:54.273 SYMLINK libspdk_nvme.so 00:03:54.273 LIB libspdk_event.a 00:03:54.273 SO libspdk_event.so.14.0 00:03:54.531 SYMLINK libspdk_event.so 00:03:56.432 LIB libspdk_bdev.a 00:03:56.432 LIB libspdk_blob.a 00:03:56.432 SO libspdk_bdev.so.15.1 00:03:56.432 SO libspdk_blob.so.11.0 00:03:56.432 SYMLINK libspdk_bdev.so 00:03:56.689 SYMLINK libspdk_blob.so 00:03:56.689 CC lib/nbd/nbd.o 00:03:56.689 CC lib/scsi/dev.o 00:03:56.689 CC lib/nvmf/ctrlr.o 00:03:56.689 CC lib/nbd/nbd_rpc.o 00:03:56.689 CC lib/nvmf/ctrlr_discovery.o 00:03:56.689 CC lib/scsi/lun.o 00:03:56.689 CC lib/ftl/ftl_core.o 00:03:56.689 CC lib/ublk/ublk.o 00:03:56.689 CC lib/lvol/lvol.o 00:03:56.947 CC lib/blobfs/blobfs.o 00:03:56.947 CC 
lib/ublk/ublk_rpc.o 00:03:56.947 CC lib/nvmf/ctrlr_bdev.o 00:03:57.205 CC lib/nvmf/subsystem.o 00:03:57.205 CC lib/scsi/port.o 00:03:57.463 CC lib/ftl/ftl_init.o 00:03:57.463 LIB libspdk_nbd.a 00:03:57.463 SO libspdk_nbd.so.7.0 00:03:57.463 CC lib/scsi/scsi.o 00:03:57.721 CC lib/scsi/scsi_bdev.o 00:03:57.721 SYMLINK libspdk_nbd.so 00:03:57.721 CC lib/ftl/ftl_layout.o 00:03:57.721 CC lib/blobfs/tree.o 00:03:58.018 LIB libspdk_ublk.a 00:03:58.018 CC lib/ftl/ftl_debug.o 00:03:58.018 SO libspdk_ublk.so.3.0 00:03:58.018 SYMLINK libspdk_ublk.so 00:03:58.018 CC lib/ftl/ftl_io.o 00:03:58.018 CC lib/ftl/ftl_sb.o 00:03:58.275 CC lib/scsi/scsi_pr.o 00:03:58.275 CC lib/scsi/scsi_rpc.o 00:03:58.275 CC lib/scsi/task.o 00:03:58.275 LIB libspdk_blobfs.a 00:03:58.275 CC lib/ftl/ftl_l2p.o 00:03:58.275 SO libspdk_blobfs.so.10.0 00:03:58.532 SYMLINK libspdk_blobfs.so 00:03:58.532 CC lib/ftl/ftl_l2p_flat.o 00:03:58.532 LIB libspdk_lvol.a 00:03:58.532 SO libspdk_lvol.so.10.0 00:03:58.532 CC lib/nvmf/nvmf.o 00:03:58.532 CC lib/nvmf/nvmf_rpc.o 00:03:58.532 CC lib/ftl/ftl_nv_cache.o 00:03:58.532 SYMLINK libspdk_lvol.so 00:03:58.532 CC lib/ftl/ftl_band.o 00:03:58.532 CC lib/ftl/ftl_band_ops.o 00:03:58.790 CC lib/nvmf/transport.o 00:03:58.790 LIB libspdk_scsi.a 00:03:58.790 CC lib/ftl/ftl_writer.o 00:03:58.790 CC lib/nvmf/tcp.o 00:03:58.790 SO libspdk_scsi.so.9.0 00:03:59.047 SYMLINK libspdk_scsi.so 00:03:59.047 CC lib/nvmf/stubs.o 00:03:59.304 CC lib/nvmf/mdns_server.o 00:03:59.304 CC lib/nvmf/rdma.o 00:03:59.304 CC lib/ftl/ftl_rq.o 00:03:59.561 CC lib/ftl/ftl_reloc.o 00:03:59.819 CC lib/nvmf/auth.o 00:04:00.077 CC lib/ftl/ftl_l2p_cache.o 00:04:00.077 CC lib/iscsi/conn.o 00:04:00.077 CC lib/iscsi/init_grp.o 00:04:00.077 CC lib/iscsi/iscsi.o 00:04:00.077 CC lib/vhost/vhost.o 00:04:00.334 CC lib/vhost/vhost_rpc.o 00:04:00.334 CC lib/iscsi/md5.o 00:04:00.592 CC lib/iscsi/param.o 00:04:00.592 CC lib/ftl/ftl_p2l.o 00:04:00.592 CC lib/iscsi/portal_grp.o 00:04:00.850 CC lib/iscsi/tgt_node.o 00:04:01.108 CC lib/ftl/mngt/ftl_mngt.o 00:04:01.108 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:01.108 CC lib/iscsi/iscsi_subsystem.o 00:04:01.108 CC lib/vhost/vhost_scsi.o 00:04:01.366 CC lib/vhost/vhost_blk.o 00:04:01.366 CC lib/vhost/rte_vhost_user.o 00:04:01.623 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:01.623 CC lib/iscsi/iscsi_rpc.o 00:04:01.623 CC lib/iscsi/task.o 00:04:01.623 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:01.623 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:01.880 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:01.880 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.138 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:02.138 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:02.138 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:02.397 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:02.397 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:02.397 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:02.397 LIB libspdk_iscsi.a 00:04:02.655 CC lib/ftl/utils/ftl_conf.o 00:04:02.655 SO libspdk_iscsi.so.8.0 00:04:02.655 CC lib/ftl/utils/ftl_md.o 00:04:02.655 CC lib/ftl/utils/ftl_mempool.o 00:04:02.655 CC lib/ftl/utils/ftl_bitmap.o 00:04:02.913 CC lib/ftl/utils/ftl_property.o 00:04:02.913 LIB libspdk_nvmf.a 00:04:02.913 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:02.913 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:02.913 SYMLINK libspdk_iscsi.so 00:04:02.913 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:03.170 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:03.170 SO libspdk_nvmf.so.18.1 00:04:03.170 LIB libspdk_vhost.a 00:04:03.170 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:03.170 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:03.170 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:03.429 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:03.429 SO libspdk_vhost.so.8.0 00:04:03.429 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:03.429 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:03.429 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:03.429 SYMLINK libspdk_nvmf.so 00:04:03.429 CC lib/ftl/base/ftl_base_dev.o 00:04:03.429 CC lib/ftl/base/ftl_base_bdev.o 00:04:03.429 CC lib/ftl/ftl_trace.o 00:04:03.429 SYMLINK libspdk_vhost.so 00:04:03.996 LIB libspdk_ftl.a 00:04:04.253 SO libspdk_ftl.so.9.0 00:04:04.817 SYMLINK libspdk_ftl.so 00:04:05.076 CC module/env_dpdk/env_dpdk_rpc.o 00:04:05.076 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:05.076 CC module/accel/dsa/accel_dsa.o 00:04:05.076 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:05.076 CC module/accel/error/accel_error.o 00:04:05.076 CC module/keyring/file/keyring.o 00:04:05.076 CC module/accel/ioat/accel_ioat.o 00:04:05.076 CC module/blob/bdev/blob_bdev.o 00:04:05.076 CC module/accel/iaa/accel_iaa.o 00:04:05.076 CC module/sock/posix/posix.o 00:04:05.334 LIB libspdk_env_dpdk_rpc.a 00:04:05.334 SO libspdk_env_dpdk_rpc.so.6.0 00:04:05.334 LIB libspdk_scheduler_dynamic.a 00:04:05.334 CC module/accel/ioat/accel_ioat_rpc.o 00:04:05.334 CC module/keyring/file/keyring_rpc.o 00:04:05.334 SO libspdk_scheduler_dynamic.so.4.0 00:04:05.334 SYMLINK libspdk_env_dpdk_rpc.so 00:04:05.334 CC module/accel/error/accel_error_rpc.o 00:04:05.334 LIB libspdk_scheduler_dpdk_governor.a 00:04:05.592 CC module/accel/iaa/accel_iaa_rpc.o 00:04:05.592 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:05.592 SYMLINK libspdk_scheduler_dynamic.so 00:04:05.592 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:05.592 CC module/accel/dsa/accel_dsa_rpc.o 00:04:05.592 LIB libspdk_blob_bdev.a 00:04:05.592 SO libspdk_blob_bdev.so.11.0 00:04:05.592 LIB libspdk_accel_ioat.a 00:04:05.592 LIB libspdk_keyring_file.a 00:04:05.592 SO libspdk_accel_ioat.so.6.0 00:04:05.592 SYMLINK libspdk_blob_bdev.so 00:04:05.592 LIB libspdk_accel_iaa.a 00:04:05.592 LIB libspdk_accel_error.a 00:04:05.592 CC module/keyring/linux/keyring.o 00:04:05.592 SO libspdk_keyring_file.so.1.0 00:04:05.850 SO libspdk_accel_iaa.so.3.0 00:04:05.850 SYMLINK libspdk_accel_ioat.so 00:04:05.850 SO libspdk_accel_error.so.2.0 00:04:05.850 CC module/scheduler/gscheduler/gscheduler.o 00:04:05.850 LIB libspdk_accel_dsa.a 00:04:05.850 CC module/keyring/linux/keyring_rpc.o 00:04:05.850 SYMLINK libspdk_keyring_file.so 00:04:05.850 SYMLINK libspdk_accel_iaa.so 00:04:05.850 SO libspdk_accel_dsa.so.5.0 00:04:05.850 SYMLINK libspdk_accel_error.so 00:04:05.850 SYMLINK libspdk_accel_dsa.so 00:04:06.108 LIB libspdk_scheduler_gscheduler.a 00:04:06.108 LIB libspdk_keyring_linux.a 00:04:06.108 CC module/bdev/error/vbdev_error.o 00:04:06.108 SO libspdk_scheduler_gscheduler.so.4.0 00:04:06.108 SO libspdk_keyring_linux.so.1.0 00:04:06.108 CC module/bdev/delay/vbdev_delay.o 00:04:06.108 CC module/bdev/gpt/gpt.o 00:04:06.108 CC module/blobfs/bdev/blobfs_bdev.o 00:04:06.108 LIB libspdk_sock_posix.a 00:04:06.108 CC module/bdev/lvol/vbdev_lvol.o 00:04:06.108 SO libspdk_sock_posix.so.6.0 00:04:06.108 CC module/bdev/malloc/bdev_malloc.o 00:04:06.108 SYMLINK libspdk_scheduler_gscheduler.so 00:04:06.108 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:06.108 SYMLINK libspdk_keyring_linux.so 00:04:06.108 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:06.366 CC module/bdev/null/bdev_null.o 00:04:06.366 SYMLINK libspdk_sock_posix.so 00:04:06.366 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:04:06.366 CC module/bdev/null/bdev_null_rpc.o 00:04:06.366 CC module/bdev/gpt/vbdev_gpt.o 00:04:06.624 CC module/bdev/error/vbdev_error_rpc.o 00:04:06.624 LIB libspdk_blobfs_bdev.a 00:04:06.624 SO libspdk_blobfs_bdev.so.6.0 00:04:06.882 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:06.882 LIB libspdk_bdev_null.a 00:04:06.882 SYMLINK libspdk_blobfs_bdev.so 00:04:06.882 LIB libspdk_bdev_error.a 00:04:06.882 CC module/bdev/nvme/bdev_nvme.o 00:04:06.882 SO libspdk_bdev_error.so.6.0 00:04:06.882 SO libspdk_bdev_null.so.6.0 00:04:06.882 LIB libspdk_bdev_malloc.a 00:04:06.882 SO libspdk_bdev_malloc.so.6.0 00:04:06.882 LIB libspdk_bdev_gpt.a 00:04:06.882 SYMLINK libspdk_bdev_error.so 00:04:06.882 SYMLINK libspdk_bdev_null.so 00:04:06.882 LIB libspdk_bdev_lvol.a 00:04:07.139 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:07.139 SYMLINK libspdk_bdev_malloc.so 00:04:07.139 CC module/bdev/nvme/nvme_rpc.o 00:04:07.139 CC module/bdev/passthru/vbdev_passthru.o 00:04:07.139 SO libspdk_bdev_gpt.so.6.0 00:04:07.139 LIB libspdk_bdev_delay.a 00:04:07.139 SO libspdk_bdev_lvol.so.6.0 00:04:07.139 CC module/bdev/raid/bdev_raid.o 00:04:07.139 CC module/bdev/split/vbdev_split.o 00:04:07.139 SO libspdk_bdev_delay.so.6.0 00:04:07.139 SYMLINK libspdk_bdev_gpt.so 00:04:07.139 SYMLINK libspdk_bdev_lvol.so 00:04:07.139 SYMLINK libspdk_bdev_delay.so 00:04:07.139 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:07.396 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:07.396 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:07.396 CC module/bdev/split/vbdev_split_rpc.o 00:04:07.396 CC module/bdev/aio/bdev_aio.o 00:04:07.396 CC module/bdev/ftl/bdev_ftl.o 00:04:07.654 LIB libspdk_bdev_passthru.a 00:04:07.654 SO libspdk_bdev_passthru.so.6.0 00:04:07.654 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:07.655 LIB libspdk_bdev_split.a 00:04:07.655 CC module/bdev/iscsi/bdev_iscsi.o 00:04:07.655 SO libspdk_bdev_split.so.6.0 00:04:07.655 SYMLINK libspdk_bdev_passthru.so 00:04:07.913 SYMLINK libspdk_bdev_split.so 00:04:07.913 CC module/bdev/aio/bdev_aio_rpc.o 00:04:07.913 LIB libspdk_bdev_zone_block.a 00:04:07.913 SO libspdk_bdev_zone_block.so.6.0 00:04:07.913 CC module/bdev/nvme/bdev_mdns_client.o 00:04:07.913 CC module/bdev/nvme/vbdev_opal.o 00:04:07.913 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:07.913 LIB libspdk_bdev_ftl.a 00:04:07.913 LIB libspdk_bdev_aio.a 00:04:07.913 SYMLINK libspdk_bdev_zone_block.so 00:04:07.913 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:08.169 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:08.169 SO libspdk_bdev_ftl.so.6.0 00:04:08.170 SO libspdk_bdev_aio.so.6.0 00:04:08.170 SYMLINK libspdk_bdev_ftl.so 00:04:08.170 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:08.170 SYMLINK libspdk_bdev_aio.so 00:04:08.170 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:08.170 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:08.170 CC module/bdev/raid/bdev_raid_rpc.o 00:04:08.427 CC module/bdev/raid/bdev_raid_sb.o 00:04:08.427 LIB libspdk_bdev_iscsi.a 00:04:08.427 SO libspdk_bdev_iscsi.so.6.0 00:04:08.427 CC module/bdev/raid/raid0.o 00:04:08.427 CC module/bdev/raid/raid1.o 00:04:08.427 SYMLINK libspdk_bdev_iscsi.so 00:04:08.427 CC module/bdev/raid/concat.o 00:04:08.766 LIB libspdk_bdev_virtio.a 00:04:08.766 SO libspdk_bdev_virtio.so.6.0 00:04:08.766 SYMLINK libspdk_bdev_virtio.so 00:04:08.766 LIB libspdk_bdev_raid.a 00:04:09.024 SO libspdk_bdev_raid.so.6.0 00:04:09.024 SYMLINK libspdk_bdev_raid.so 00:04:09.957 LIB libspdk_bdev_nvme.a 00:04:09.957 SO libspdk_bdev_nvme.so.7.0 00:04:09.957 
SYMLINK libspdk_bdev_nvme.so 00:04:10.522 CC module/event/subsystems/scheduler/scheduler.o 00:04:10.522 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:10.522 CC module/event/subsystems/sock/sock.o 00:04:10.522 CC module/event/subsystems/vmd/vmd.o 00:04:10.522 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:10.522 CC module/event/subsystems/keyring/keyring.o 00:04:10.522 CC module/event/subsystems/iobuf/iobuf.o 00:04:10.522 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:10.780 LIB libspdk_event_scheduler.a 00:04:10.780 LIB libspdk_event_iobuf.a 00:04:10.780 SO libspdk_event_scheduler.so.4.0 00:04:10.780 LIB libspdk_event_vmd.a 00:04:10.780 LIB libspdk_event_keyring.a 00:04:10.781 LIB libspdk_event_vhost_blk.a 00:04:10.781 LIB libspdk_event_sock.a 00:04:10.781 SO libspdk_event_iobuf.so.3.0 00:04:10.781 SYMLINK libspdk_event_scheduler.so 00:04:10.781 SO libspdk_event_vhost_blk.so.3.0 00:04:10.781 SO libspdk_event_keyring.so.1.0 00:04:10.781 SO libspdk_event_sock.so.5.0 00:04:10.781 SO libspdk_event_vmd.so.6.0 00:04:10.781 SYMLINK libspdk_event_keyring.so 00:04:10.781 SYMLINK libspdk_event_sock.so 00:04:10.781 SYMLINK libspdk_event_iobuf.so 00:04:10.781 SYMLINK libspdk_event_vhost_blk.so 00:04:11.038 SYMLINK libspdk_event_vmd.so 00:04:11.038 CC module/event/subsystems/accel/accel.o 00:04:11.295 LIB libspdk_event_accel.a 00:04:11.295 SO libspdk_event_accel.so.6.0 00:04:11.295 SYMLINK libspdk_event_accel.so 00:04:11.552 CC module/event/subsystems/bdev/bdev.o 00:04:11.809 LIB libspdk_event_bdev.a 00:04:12.066 SO libspdk_event_bdev.so.6.0 00:04:12.066 SYMLINK libspdk_event_bdev.so 00:04:12.322 CC module/event/subsystems/scsi/scsi.o 00:04:12.322 CC module/event/subsystems/ublk/ublk.o 00:04:12.322 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:12.322 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:12.322 CC module/event/subsystems/nbd/nbd.o 00:04:12.322 LIB libspdk_event_nbd.a 00:04:12.322 LIB libspdk_event_ublk.a 00:04:12.322 SO libspdk_event_nbd.so.6.0 00:04:12.581 SO libspdk_event_ublk.so.3.0 00:04:12.581 LIB libspdk_event_scsi.a 00:04:12.581 SYMLINK libspdk_event_nbd.so 00:04:12.581 SO libspdk_event_scsi.so.6.0 00:04:12.581 SYMLINK libspdk_event_ublk.so 00:04:12.581 SYMLINK libspdk_event_scsi.so 00:04:12.581 LIB libspdk_event_nvmf.a 00:04:12.581 SO libspdk_event_nvmf.so.6.0 00:04:12.837 SYMLINK libspdk_event_nvmf.so 00:04:12.837 CC module/event/subsystems/iscsi/iscsi.o 00:04:12.837 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:13.094 LIB libspdk_event_iscsi.a 00:04:13.094 LIB libspdk_event_vhost_scsi.a 00:04:13.094 SO libspdk_event_vhost_scsi.so.3.0 00:04:13.094 SO libspdk_event_iscsi.so.6.0 00:04:13.094 SYMLINK libspdk_event_vhost_scsi.so 00:04:13.094 SYMLINK libspdk_event_iscsi.so 00:04:13.350 SO libspdk.so.6.0 00:04:13.350 SYMLINK libspdk.so 00:04:13.608 CC app/trace_record/trace_record.o 00:04:13.608 TEST_HEADER include/spdk/accel.h 00:04:13.608 TEST_HEADER include/spdk/accel_module.h 00:04:13.608 CXX app/trace/trace.o 00:04:13.608 TEST_HEADER include/spdk/assert.h 00:04:13.608 TEST_HEADER include/spdk/barrier.h 00:04:13.608 TEST_HEADER include/spdk/base64.h 00:04:13.608 TEST_HEADER include/spdk/bdev.h 00:04:13.608 TEST_HEADER include/spdk/bdev_module.h 00:04:13.608 TEST_HEADER include/spdk/bdev_zone.h 00:04:13.608 TEST_HEADER include/spdk/bit_array.h 00:04:13.608 TEST_HEADER include/spdk/bit_pool.h 00:04:13.608 TEST_HEADER include/spdk/blob_bdev.h 00:04:13.608 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:13.608 TEST_HEADER include/spdk/blobfs.h 00:04:13.608 
TEST_HEADER include/spdk/blob.h 00:04:13.608 TEST_HEADER include/spdk/conf.h 00:04:13.608 TEST_HEADER include/spdk/config.h 00:04:13.608 TEST_HEADER include/spdk/cpuset.h 00:04:13.608 TEST_HEADER include/spdk/crc16.h 00:04:13.608 TEST_HEADER include/spdk/crc32.h 00:04:13.608 TEST_HEADER include/spdk/crc64.h 00:04:13.608 TEST_HEADER include/spdk/dif.h 00:04:13.608 TEST_HEADER include/spdk/dma.h 00:04:13.608 TEST_HEADER include/spdk/endian.h 00:04:13.608 CC app/nvmf_tgt/nvmf_main.o 00:04:13.608 TEST_HEADER include/spdk/env_dpdk.h 00:04:13.608 TEST_HEADER include/spdk/env.h 00:04:13.608 TEST_HEADER include/spdk/event.h 00:04:13.608 TEST_HEADER include/spdk/fd_group.h 00:04:13.608 TEST_HEADER include/spdk/fd.h 00:04:13.608 TEST_HEADER include/spdk/file.h 00:04:13.608 TEST_HEADER include/spdk/ftl.h 00:04:13.608 CC examples/util/zipf/zipf.o 00:04:13.608 TEST_HEADER include/spdk/gpt_spec.h 00:04:13.608 TEST_HEADER include/spdk/hexlify.h 00:04:13.608 TEST_HEADER include/spdk/histogram_data.h 00:04:13.608 TEST_HEADER include/spdk/idxd.h 00:04:13.608 TEST_HEADER include/spdk/idxd_spec.h 00:04:13.608 TEST_HEADER include/spdk/init.h 00:04:13.608 CC test/thread/poller_perf/poller_perf.o 00:04:13.608 TEST_HEADER include/spdk/ioat.h 00:04:13.608 TEST_HEADER include/spdk/ioat_spec.h 00:04:13.608 TEST_HEADER include/spdk/iscsi_spec.h 00:04:13.608 TEST_HEADER include/spdk/json.h 00:04:13.608 TEST_HEADER include/spdk/jsonrpc.h 00:04:13.608 CC examples/ioat/perf/perf.o 00:04:13.608 TEST_HEADER include/spdk/keyring.h 00:04:13.608 TEST_HEADER include/spdk/keyring_module.h 00:04:13.608 TEST_HEADER include/spdk/likely.h 00:04:13.608 TEST_HEADER include/spdk/log.h 00:04:13.608 TEST_HEADER include/spdk/lvol.h 00:04:13.608 TEST_HEADER include/spdk/memory.h 00:04:13.608 TEST_HEADER include/spdk/mmio.h 00:04:13.866 TEST_HEADER include/spdk/nbd.h 00:04:13.866 TEST_HEADER include/spdk/notify.h 00:04:13.866 CC test/dma/test_dma/test_dma.o 00:04:13.866 TEST_HEADER include/spdk/nvme.h 00:04:13.866 TEST_HEADER include/spdk/nvme_intel.h 00:04:13.866 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:13.866 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:13.866 TEST_HEADER include/spdk/nvme_spec.h 00:04:13.866 TEST_HEADER include/spdk/nvme_zns.h 00:04:13.866 CC test/app/bdev_svc/bdev_svc.o 00:04:13.866 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:13.866 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:13.866 TEST_HEADER include/spdk/nvmf.h 00:04:13.866 TEST_HEADER include/spdk/nvmf_spec.h 00:04:13.866 TEST_HEADER include/spdk/nvmf_transport.h 00:04:13.866 TEST_HEADER include/spdk/opal.h 00:04:13.866 TEST_HEADER include/spdk/opal_spec.h 00:04:13.866 TEST_HEADER include/spdk/pci_ids.h 00:04:13.866 TEST_HEADER include/spdk/pipe.h 00:04:13.866 TEST_HEADER include/spdk/queue.h 00:04:13.866 TEST_HEADER include/spdk/reduce.h 00:04:13.866 TEST_HEADER include/spdk/rpc.h 00:04:13.866 TEST_HEADER include/spdk/scheduler.h 00:04:13.866 TEST_HEADER include/spdk/scsi.h 00:04:13.866 TEST_HEADER include/spdk/scsi_spec.h 00:04:13.866 TEST_HEADER include/spdk/sock.h 00:04:13.866 TEST_HEADER include/spdk/stdinc.h 00:04:13.866 TEST_HEADER include/spdk/string.h 00:04:13.866 TEST_HEADER include/spdk/thread.h 00:04:13.866 TEST_HEADER include/spdk/trace.h 00:04:13.866 TEST_HEADER include/spdk/trace_parser.h 00:04:13.866 TEST_HEADER include/spdk/tree.h 00:04:13.866 TEST_HEADER include/spdk/ublk.h 00:04:13.866 TEST_HEADER include/spdk/util.h 00:04:13.866 TEST_HEADER include/spdk/uuid.h 00:04:13.866 LINK spdk_trace_record 00:04:13.866 TEST_HEADER 
include/spdk/version.h 00:04:13.866 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:13.866 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:13.866 TEST_HEADER include/spdk/vhost.h 00:04:13.866 TEST_HEADER include/spdk/vmd.h 00:04:13.866 CC test/env/mem_callbacks/mem_callbacks.o 00:04:13.866 TEST_HEADER include/spdk/xor.h 00:04:13.866 TEST_HEADER include/spdk/zipf.h 00:04:13.866 CXX test/cpp_headers/accel.o 00:04:13.866 LINK nvmf_tgt 00:04:13.866 LINK zipf 00:04:14.137 LINK poller_perf 00:04:14.137 LINK ioat_perf 00:04:14.137 LINK bdev_svc 00:04:14.137 LINK spdk_trace 00:04:14.137 CC test/env/vtophys/vtophys.o 00:04:14.137 CXX test/cpp_headers/accel_module.o 00:04:14.137 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:14.394 CXX test/cpp_headers/assert.o 00:04:14.394 LINK test_dma 00:04:14.394 CC test/env/memory/memory_ut.o 00:04:14.394 CC examples/ioat/verify/verify.o 00:04:14.394 LINK vtophys 00:04:14.652 LINK env_dpdk_post_init 00:04:14.652 CXX test/cpp_headers/barrier.o 00:04:14.652 CC test/app/histogram_perf/histogram_perf.o 00:04:14.652 CC app/iscsi_tgt/iscsi_tgt.o 00:04:14.652 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:14.910 CXX test/cpp_headers/base64.o 00:04:14.910 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:14.910 LINK verify 00:04:14.910 LINK iscsi_tgt 00:04:14.910 LINK histogram_perf 00:04:14.910 CC test/app/jsoncat/jsoncat.o 00:04:14.910 LINK mem_callbacks 00:04:15.197 CC test/env/pci/pci_ut.o 00:04:15.197 CXX test/cpp_headers/bdev.o 00:04:15.197 LINK jsoncat 00:04:15.197 CXX test/cpp_headers/bdev_module.o 00:04:15.197 CC test/app/stub/stub.o 00:04:15.455 CXX test/cpp_headers/bdev_zone.o 00:04:15.455 CC app/spdk_tgt/spdk_tgt.o 00:04:15.456 LINK nvme_fuzz 00:04:15.456 LINK stub 00:04:15.456 LINK pci_ut 00:04:15.456 CXX test/cpp_headers/bit_array.o 00:04:15.456 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:15.713 LINK memory_ut 00:04:15.713 CXX test/cpp_headers/bit_pool.o 00:04:15.713 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:15.713 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:15.713 LINK spdk_tgt 00:04:15.970 CC test/rpc_client/rpc_client_test.o 00:04:15.970 CXX test/cpp_headers/blob_bdev.o 00:04:15.970 LINK interrupt_tgt 00:04:16.227 CC test/accel/dif/dif.o 00:04:16.227 CC test/blobfs/mkfs/mkfs.o 00:04:16.227 LINK vhost_fuzz 00:04:16.227 LINK rpc_client_test 00:04:16.227 CC test/event/event_perf/event_perf.o 00:04:16.227 CXX test/cpp_headers/blobfs_bdev.o 00:04:16.227 CC app/spdk_lspci/spdk_lspci.o 00:04:16.485 LINK mkfs 00:04:16.485 CC app/spdk_nvme_perf/perf.o 00:04:16.485 CXX test/cpp_headers/blobfs.o 00:04:16.485 LINK event_perf 00:04:16.485 LINK spdk_lspci 00:04:16.747 CC test/event/reactor/reactor.o 00:04:16.747 CC app/spdk_nvme_identify/identify.o 00:04:16.747 CXX test/cpp_headers/blob.o 00:04:16.747 LINK reactor 00:04:16.747 LINK dif 00:04:17.005 CXX test/cpp_headers/conf.o 00:04:17.262 CC test/nvme/aer/aer.o 00:04:17.262 CC test/event/reactor_perf/reactor_perf.o 00:04:17.262 CC examples/thread/thread/thread_ex.o 00:04:17.521 CC test/lvol/esnap/esnap.o 00:04:17.521 CXX test/cpp_headers/config.o 00:04:17.521 CXX test/cpp_headers/cpuset.o 00:04:17.521 LINK spdk_nvme_perf 00:04:17.521 CC test/event/app_repeat/app_repeat.o 00:04:17.521 LINK reactor_perf 00:04:17.521 LINK iscsi_fuzz 00:04:17.779 LINK aer 00:04:17.779 CXX test/cpp_headers/crc16.o 00:04:17.779 LINK thread 00:04:17.779 LINK app_repeat 00:04:18.038 CXX test/cpp_headers/crc32.o 00:04:18.038 CC examples/sock/hello_world/hello_sock.o 00:04:18.307 CC examples/vmd/lsvmd/lsvmd.o 00:04:18.308 
CC test/nvme/reset/reset.o 00:04:18.308 CXX test/cpp_headers/crc64.o 00:04:18.308 CC examples/idxd/perf/perf.o 00:04:18.308 CC test/event/scheduler/scheduler.o 00:04:18.308 LINK spdk_nvme_identify 00:04:18.569 LINK lsvmd 00:04:18.569 CC examples/accel/perf/accel_perf.o 00:04:18.569 LINK hello_sock 00:04:18.569 LINK reset 00:04:18.569 CXX test/cpp_headers/dif.o 00:04:18.826 CXX test/cpp_headers/dma.o 00:04:18.826 LINK scheduler 00:04:18.826 CC app/spdk_nvme_discover/discovery_aer.o 00:04:18.826 CC app/spdk_top/spdk_top.o 00:04:18.826 LINK idxd_perf 00:04:18.826 CC examples/vmd/led/led.o 00:04:18.826 CXX test/cpp_headers/endian.o 00:04:19.083 CC test/nvme/sgl/sgl.o 00:04:19.083 CXX test/cpp_headers/env_dpdk.o 00:04:19.083 CXX test/cpp_headers/env.o 00:04:19.083 LINK spdk_nvme_discover 00:04:19.083 LINK accel_perf 00:04:19.083 LINK led 00:04:19.340 CXX test/cpp_headers/event.o 00:04:19.340 CXX test/cpp_headers/fd_group.o 00:04:19.340 LINK sgl 00:04:19.597 CXX test/cpp_headers/fd.o 00:04:19.597 CC examples/nvme/hello_world/hello_world.o 00:04:19.597 CC examples/blob/cli/blobcli.o 00:04:19.597 CC examples/blob/hello_world/hello_blob.o 00:04:19.597 CXX test/cpp_headers/file.o 00:04:19.854 CC test/nvme/e2edp/nvme_dp.o 00:04:19.854 CC examples/nvme/reconnect/reconnect.o 00:04:19.854 CC examples/bdev/hello_world/hello_bdev.o 00:04:19.854 CXX test/cpp_headers/ftl.o 00:04:20.111 LINK hello_world 00:04:20.111 LINK hello_blob 00:04:20.111 LINK nvme_dp 00:04:20.111 LINK hello_bdev 00:04:20.111 CXX test/cpp_headers/gpt_spec.o 00:04:20.368 LINK reconnect 00:04:20.368 LINK spdk_top 00:04:20.368 CXX test/cpp_headers/hexlify.o 00:04:20.368 CC app/vhost/vhost.o 00:04:20.368 CC test/bdev/bdevio/bdevio.o 00:04:20.626 LINK blobcli 00:04:20.626 CC test/nvme/overhead/overhead.o 00:04:20.626 CC examples/bdev/bdevperf/bdevperf.o 00:04:20.626 CXX test/cpp_headers/histogram_data.o 00:04:20.626 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:20.884 CC app/spdk_dd/spdk_dd.o 00:04:20.884 LINK vhost 00:04:20.884 CXX test/cpp_headers/idxd.o 00:04:21.142 LINK overhead 00:04:21.142 LINK bdevio 00:04:21.142 CC app/fio/nvme/fio_plugin.o 00:04:21.142 CXX test/cpp_headers/idxd_spec.o 00:04:21.400 CXX test/cpp_headers/init.o 00:04:21.400 LINK nvme_manage 00:04:21.400 CC test/nvme/err_injection/err_injection.o 00:04:21.658 LINK spdk_dd 00:04:21.658 CC test/nvme/startup/startup.o 00:04:21.658 CC test/nvme/reserve/reserve.o 00:04:21.658 CXX test/cpp_headers/ioat.o 00:04:21.658 LINK bdevperf 00:04:21.658 LINK err_injection 00:04:21.969 LINK startup 00:04:21.969 CC examples/nvme/arbitration/arbitration.o 00:04:21.969 LINK reserve 00:04:21.969 CXX test/cpp_headers/ioat_spec.o 00:04:21.969 CC app/fio/bdev/fio_plugin.o 00:04:21.969 LINK spdk_nvme 00:04:21.969 CXX test/cpp_headers/iscsi_spec.o 00:04:22.228 CC test/nvme/simple_copy/simple_copy.o 00:04:22.228 CXX test/cpp_headers/json.o 00:04:22.228 CC test/nvme/connect_stress/connect_stress.o 00:04:22.228 CC test/nvme/boot_partition/boot_partition.o 00:04:22.228 CC test/nvme/compliance/nvme_compliance.o 00:04:22.228 LINK arbitration 00:04:22.228 CXX test/cpp_headers/jsonrpc.o 00:04:22.486 LINK simple_copy 00:04:22.486 CC test/nvme/fused_ordering/fused_ordering.o 00:04:22.486 LINK connect_stress 00:04:22.486 LINK boot_partition 00:04:22.486 CXX test/cpp_headers/keyring.o 00:04:22.486 LINK nvme_compliance 00:04:22.486 CC examples/nvme/hotplug/hotplug.o 00:04:22.744 CXX test/cpp_headers/keyring_module.o 00:04:22.744 LINK fused_ordering 00:04:22.744 LINK spdk_bdev 00:04:22.744 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:04:22.744 CC test/nvme/fdp/fdp.o 00:04:22.744 CXX test/cpp_headers/likely.o 00:04:22.744 CXX test/cpp_headers/log.o 00:04:22.744 CC test/nvme/cuse/cuse.o 00:04:22.745 CXX test/cpp_headers/lvol.o 00:04:23.002 LINK hotplug 00:04:23.002 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:23.002 LINK doorbell_aers 00:04:23.002 CXX test/cpp_headers/memory.o 00:04:23.002 CC examples/nvme/abort/abort.o 00:04:23.002 CXX test/cpp_headers/mmio.o 00:04:23.002 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:23.002 LINK fdp 00:04:23.260 LINK cmb_copy 00:04:23.260 CXX test/cpp_headers/nbd.o 00:04:23.260 CXX test/cpp_headers/notify.o 00:04:23.260 CXX test/cpp_headers/nvme.o 00:04:23.260 CXX test/cpp_headers/nvme_intel.o 00:04:23.260 CXX test/cpp_headers/nvme_ocssd.o 00:04:23.260 LINK pmr_persistence 00:04:23.260 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:23.260 CXX test/cpp_headers/nvme_spec.o 00:04:23.519 CXX test/cpp_headers/nvme_zns.o 00:04:23.519 LINK abort 00:04:23.519 CXX test/cpp_headers/nvmf_cmd.o 00:04:23.519 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:23.519 CXX test/cpp_headers/nvmf.o 00:04:23.519 CXX test/cpp_headers/nvmf_spec.o 00:04:23.519 CXX test/cpp_headers/nvmf_transport.o 00:04:23.519 CXX test/cpp_headers/opal.o 00:04:23.519 CXX test/cpp_headers/opal_spec.o 00:04:23.777 CXX test/cpp_headers/pci_ids.o 00:04:23.777 CXX test/cpp_headers/pipe.o 00:04:23.777 CXX test/cpp_headers/queue.o 00:04:23.777 CXX test/cpp_headers/reduce.o 00:04:23.777 CXX test/cpp_headers/rpc.o 00:04:23.777 CXX test/cpp_headers/scheduler.o 00:04:23.777 CXX test/cpp_headers/scsi.o 00:04:24.035 CXX test/cpp_headers/scsi_spec.o 00:04:24.035 CXX test/cpp_headers/sock.o 00:04:24.035 CXX test/cpp_headers/stdinc.o 00:04:24.035 CC examples/nvmf/nvmf/nvmf.o 00:04:24.035 CXX test/cpp_headers/string.o 00:04:24.035 CXX test/cpp_headers/thread.o 00:04:24.035 CXX test/cpp_headers/trace.o 00:04:24.035 CXX test/cpp_headers/trace_parser.o 00:04:24.293 CXX test/cpp_headers/tree.o 00:04:24.293 CXX test/cpp_headers/ublk.o 00:04:24.293 CXX test/cpp_headers/util.o 00:04:24.293 CXX test/cpp_headers/uuid.o 00:04:24.293 CXX test/cpp_headers/version.o 00:04:24.293 CXX test/cpp_headers/vfio_user_pci.o 00:04:24.293 CXX test/cpp_headers/vfio_user_spec.o 00:04:24.293 LINK cuse 00:04:24.293 CXX test/cpp_headers/vhost.o 00:04:24.293 CXX test/cpp_headers/vmd.o 00:04:24.293 LINK nvmf 00:04:24.293 CXX test/cpp_headers/xor.o 00:04:24.551 CXX test/cpp_headers/zipf.o 00:04:24.551 LINK esnap 00:04:24.810 00:04:24.810 real 1m16.755s 00:04:24.810 user 8m21.192s 00:04:24.810 sys 1m47.593s 00:04:24.810 15:27:19 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:24.810 15:27:19 make -- common/autotest_common.sh@10 -- $ set +x 00:04:24.810 ************************************ 00:04:24.810 END TEST make 00:04:24.810 ************************************ 00:04:25.068 15:27:19 -- common/autotest_common.sh@1142 -- $ return 0 00:04:25.068 15:27:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:25.068 15:27:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:25.068 15:27:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:25.068 15:27:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.068 15:27:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:25.068 15:27:19 -- pm/common@44 -- $ pid=5178 00:04:25.068 15:27:19 -- pm/common@50 -- $ kill -TERM 5178 00:04:25.068 15:27:19 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:25.068 15:27:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:25.068 15:27:19 -- pm/common@44 -- $ pid=5180 00:04:25.068 15:27:19 -- pm/common@50 -- $ kill -TERM 5180 00:04:25.069 15:27:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:25.069 15:27:20 -- nvmf/common.sh@7 -- # uname -s 00:04:25.069 15:27:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:25.069 15:27:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:25.069 15:27:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:25.069 15:27:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:25.069 15:27:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:25.069 15:27:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:25.069 15:27:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:25.069 15:27:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:25.069 15:27:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:25.069 15:27:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:25.069 15:27:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:04:25.069 15:27:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:04:25.069 15:27:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:25.069 15:27:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:25.069 15:27:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:25.069 15:27:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:25.069 15:27:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:25.069 15:27:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:25.069 15:27:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:25.069 15:27:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:25.069 15:27:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.069 15:27:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.069 15:27:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.069 15:27:20 -- paths/export.sh@5 -- # export PATH 00:04:25.069 15:27:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.069 15:27:20 -- nvmf/common.sh@47 -- # : 0 00:04:25.069 15:27:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:25.069 15:27:20 -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:04:25.069 15:27:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:25.069 15:27:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:25.069 15:27:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:25.069 15:27:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:25.069 15:27:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:25.069 15:27:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:25.069 15:27:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:25.069 15:27:20 -- spdk/autotest.sh@32 -- # uname -s 00:04:25.069 15:27:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:25.069 15:27:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:25.069 15:27:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:25.069 15:27:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:25.069 15:27:20 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:25.069 15:27:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:25.069 15:27:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:25.069 15:27:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:25.069 15:27:20 -- spdk/autotest.sh@48 -- # udevadm_pid=54638 00:04:25.069 15:27:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:25.069 15:27:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:25.069 15:27:20 -- pm/common@17 -- # local monitor 00:04:25.069 15:27:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.069 15:27:20 -- pm/common@21 -- # date +%s 00:04:25.069 15:27:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.069 15:27:20 -- pm/common@21 -- # date +%s 00:04:25.069 15:27:20 -- pm/common@25 -- # sleep 1 00:04:25.069 15:27:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721057240 00:04:25.069 15:27:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721057240 00:04:25.069 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721057240_collect-cpu-load.pm.log 00:04:25.069 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721057240_collect-vmstat.pm.log 00:04:26.005 15:27:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:26.005 15:27:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:26.005 15:27:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.005 15:27:21 -- common/autotest_common.sh@10 -- # set +x 00:04:26.005 15:27:21 -- spdk/autotest.sh@59 -- # create_test_list 00:04:26.005 15:27:21 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:26.005 15:27:21 -- common/autotest_common.sh@10 -- # set +x 00:04:26.263 15:27:21 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:26.263 15:27:21 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:26.263 15:27:21 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:26.263 15:27:21 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:26.263 15:27:21 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:26.263 15:27:21 -- 
spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:26.263 15:27:21 -- common/autotest_common.sh@1455 -- # uname 00:04:26.263 15:27:21 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:26.263 15:27:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:26.263 15:27:21 -- common/autotest_common.sh@1475 -- # uname 00:04:26.263 15:27:21 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:26.263 15:27:21 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:26.263 15:27:21 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:26.263 15:27:21 -- spdk/autotest.sh@72 -- # hash lcov 00:04:26.263 15:27:21 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:26.263 15:27:21 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:26.263 --rc lcov_branch_coverage=1 00:04:26.263 --rc lcov_function_coverage=1 00:04:26.263 --rc genhtml_branch_coverage=1 00:04:26.263 --rc genhtml_function_coverage=1 00:04:26.263 --rc genhtml_legend=1 00:04:26.263 --rc geninfo_all_blocks=1 00:04:26.263 ' 00:04:26.263 15:27:21 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:26.263 --rc lcov_branch_coverage=1 00:04:26.263 --rc lcov_function_coverage=1 00:04:26.263 --rc genhtml_branch_coverage=1 00:04:26.263 --rc genhtml_function_coverage=1 00:04:26.263 --rc genhtml_legend=1 00:04:26.263 --rc geninfo_all_blocks=1 00:04:26.263 ' 00:04:26.263 15:27:21 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:26.263 --rc lcov_branch_coverage=1 00:04:26.263 --rc lcov_function_coverage=1 00:04:26.263 --rc genhtml_branch_coverage=1 00:04:26.263 --rc genhtml_function_coverage=1 00:04:26.263 --rc genhtml_legend=1 00:04:26.263 --rc geninfo_all_blocks=1 00:04:26.263 --no-external' 00:04:26.263 15:27:21 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:26.263 --rc lcov_branch_coverage=1 00:04:26.263 --rc lcov_function_coverage=1 00:04:26.263 --rc genhtml_branch_coverage=1 00:04:26.263 --rc genhtml_function_coverage=1 00:04:26.263 --rc genhtml_legend=1 00:04:26.263 --rc geninfo_all_blocks=1 00:04:26.263 --no-external' 00:04:26.263 15:27:21 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:26.263 lcov: LCOV version 1.14 00:04:26.263 15:27:21 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:44.349 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:44.349 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:56.619 
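The nvmf/common.sh block traced above builds a per-run NVMe host identity (NVME_HOSTNQN and NVME_HOSTID derived from nvme gen-hostnqn) and stashes it in the NVME_HOST array for later 'nvme connect' calls against the test subsystem. A minimal standalone sketch of that pattern, assuming nvme-cli is installed; the connect invocation at the end is illustrative only (it is not copied from this log) and simply reuses the address, port and subsystem NQN that the traced variables declare:

    #!/usr/bin/env bash
    # Derive a host NQN/ID pair the same way the traced common.sh does,
    # then keep them as ready-made 'nvme connect' arguments.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep just the UUID part
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    echo "host NQN: $NVME_HOSTNQN"
    echo "host ID:  $NVME_HOSTID"
    # Illustrative attach against the TCP listener the test brings up later;
    # only run this with a live target listening at 127.0.0.1:4420.
    # nvme connect -t tcp -a 127.0.0.1 -s 4420 \
    #     -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"
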
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:56.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no 
functions found 00:04:56.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:56.620 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:56.620 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:56.620 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:56.620 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:56.620 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:56.620 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:56.620 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:56.620 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:56.620 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:56.620 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:56.620 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:56.620 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:56.620 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:56.620 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:56.877 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:56.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:56.878 
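The coverage setup traced a little earlier exports LCOV_OPTS and takes an initial "Baseline" capture (lcov -c -i) before any tests run; the long run of geninfo "no functions found" warnings around it is expected, because the cpp_headers objects compile each public header on its own and their .gcno files therefore contain no executable functions. A reduced sketch of that baseline step under the same assumptions (lcov 1.14 on PATH, a tree built with --coverage at $src, output directory next to it):

    # Take an all-zero baseline so per-test coverage can later be diffed
    # against it; mirrors the lcov invocation shown above.
    src=/home/vagrant/spdk_repo/spdk
    out=$src/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
    lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"
    # Header-only objects legitimately trigger "geninfo: WARNING: GCOV did
    # not produce any data" during this capture; the warnings are harmless.
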
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:56.878 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:56.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:57.136 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:57.136 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:57.136 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:57.136 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:57.136 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:57.136 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:57.136 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:57.136 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:57.136 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:57.136 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:57.136 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:57.136 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:01.319 15:27:55 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:01.319 15:27:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.319 15:27:55 -- common/autotest_common.sh@10 -- # set +x 00:05:01.319 15:27:55 -- spdk/autotest.sh@91 -- # rm -f 00:05:01.319 15:27:55 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.577 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:01.577 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:01.577 15:27:56 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:01.577 15:27:56 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:01.577 15:27:56 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:01.577 15:27:56 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:01.577 15:27:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:01.577 15:27:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:01.577 15:27:56 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:01.577 15:27:56 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:01.577 15:27:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:01.577 15:27:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:01.577 15:27:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:01.577 15:27:56 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:01.577 15:27:56 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:01.577 15:27:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:01.577 15:27:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:01.577 15:27:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:01.577 15:27:56 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:01.577 15:27:56 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:01.577 15:27:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:01.577 15:27:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:01.577 15:27:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:01.577 15:27:56 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:01.577 15:27:56 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:01.577 15:27:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:01.577 15:27:56 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:01.577 15:27:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.577 15:27:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:01.577 15:27:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:01.577 15:27:56 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:01.577 
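The get_zoned_devs trace just above checks each NVMe block device's queue/zoned attribute in sysfs and remembers anything that does not report "none", so the destructive steps that follow can skip zoned namespaces. A standalone sketch of the same check; the nvme* names are simply whatever /sys/block happens to expose on the host:

    # Collect zoned block devices so later wipes can avoid them.
    shopt -s nullglob            # make the loop a no-op if no nvme devices exist
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        # queue/zoned reads "none" for conventional namespaces and
        # "host-managed"/"host-aware" for zoned ones.
        if [[ -e $nvme/queue/zoned ]] && [[ $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done
    echo "found ${#zoned_devs[@]} zoned device(s): ${!zoned_devs[*]}"
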
15:27:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:01.577 No valid GPT data, bailing 00:05:01.577 15:27:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:01.577 15:27:56 -- scripts/common.sh@391 -- # pt= 00:05:01.577 15:27:56 -- scripts/common.sh@392 -- # return 1 00:05:01.577 15:27:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:01.577 1+0 records in 00:05:01.577 1+0 records out 00:05:01.577 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00348443 s, 301 MB/s 00:05:01.577 15:27:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.577 15:27:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:01.577 15:27:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:01.577 15:27:56 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:01.577 15:27:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:01.577 No valid GPT data, bailing 00:05:01.835 15:27:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:01.835 15:27:56 -- scripts/common.sh@391 -- # pt= 00:05:01.835 15:27:56 -- scripts/common.sh@392 -- # return 1 00:05:01.835 15:27:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:01.835 1+0 records in 00:05:01.835 1+0 records out 00:05:01.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00343978 s, 305 MB/s 00:05:01.835 15:27:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.835 15:27:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:01.835 15:27:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:01.835 15:27:56 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:01.835 15:27:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:01.835 No valid GPT data, bailing 00:05:01.835 15:27:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:01.835 15:27:56 -- scripts/common.sh@391 -- # pt= 00:05:01.835 15:27:56 -- scripts/common.sh@392 -- # return 1 00:05:01.835 15:27:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:01.835 1+0 records in 00:05:01.835 1+0 records out 00:05:01.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00339052 s, 309 MB/s 00:05:01.835 15:27:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.835 15:27:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:01.835 15:27:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:01.835 15:27:56 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:01.835 15:27:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:01.835 No valid GPT data, bailing 00:05:01.835 15:27:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:01.835 15:27:56 -- scripts/common.sh@391 -- # pt= 00:05:01.835 15:27:56 -- scripts/common.sh@392 -- # return 1 00:05:01.835 15:27:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:01.835 1+0 records in 00:05:01.835 1+0 records out 00:05:01.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465358 s, 225 MB/s 00:05:01.835 15:27:56 -- spdk/autotest.sh@118 -- # sync 00:05:01.835 15:27:56 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:01.835 15:27:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:01.835 15:27:56 -- common/autotest_common.sh@22 
-- # reap_spdk_processes 00:05:03.733 15:27:58 -- spdk/autotest.sh@124 -- # uname -s 00:05:03.733 15:27:58 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:03.733 15:27:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:03.733 15:27:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.733 15:27:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.733 15:27:58 -- common/autotest_common.sh@10 -- # set +x 00:05:03.734 ************************************ 00:05:03.734 START TEST setup.sh 00:05:03.734 ************************************ 00:05:03.734 15:27:58 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:03.734 * Looking for test storage... 00:05:03.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:03.734 15:27:58 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:03.734 15:27:58 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:03.734 15:27:58 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:03.734 15:27:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.734 15:27:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.734 15:27:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:03.734 ************************************ 00:05:03.734 START TEST acl 00:05:03.734 ************************************ 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:03.734 * Looking for test storage... 00:05:03.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:03.734 15:27:58 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:03.734 15:27:58 
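Before the wipes logged above, each namespace is screened with the spdk-gpt.py helper and blkid; only when nothing valid turns up ("No valid GPT data, bailing" and an empty PTTYPE) does autotest zero the first megabyte with dd. A minimal sketch of that guard, assuming blkid is available and dev points at a disposable test namespace, never at a disk holding data:

    dev=/dev/nvme0n1   # throwaway test device only
    # Wipe only when blkid cannot identify any partition-table type.
    pt=$(blkid -s PTTYPE -o value "$dev")
    if [[ -z $pt ]]; then
        # Nothing recognizable on the device: clear the first MiB so the
        # tests start from a known-blank namespace, as in the dd lines above.
        dd if=/dev/zero of="$dev" bs=1M count=1
    else
        echo "$dev already carries a $pt partition table; leaving it alone"
    fi
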
setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:03.734 15:27:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:03.734 15:27:58 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:03.734 15:27:58 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:03.734 15:27:58 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:03.734 15:27:58 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:03.734 15:27:58 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:03.734 15:27:58 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.734 15:27:58 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.300 15:27:59 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:04.300 15:27:59 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:04.300 15:27:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.300 15:27:59 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:04.300 15:27:59 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.300 15:27:59 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.865 Hugepages 00:05:04.865 node hugesize free / total 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.865 00:05:04.865 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:04.865 15:27:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.168 15:28:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:05.168 15:28:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:05.168 15:28:00 setup.sh.acl 
-- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:05.168 15:28:00 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:05.168 15:28:00 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:05.168 15:28:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.168 15:28:00 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:05.168 15:28:00 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:05.168 15:28:00 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.168 15:28:00 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.168 15:28:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:05.168 ************************************ 00:05:05.168 START TEST denied 00:05:05.168 ************************************ 00:05:05.168 15:28:00 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:05.168 15:28:00 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:05.168 15:28:00 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:05.168 15:28:00 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:05.168 15:28:00 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.168 15:28:00 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:05.732 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:05.732 15:28:00 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:05.732 15:28:00 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:05.732 15:28:00 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:05.732 15:28:00 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:05.732 15:28:00 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:05.732 15:28:00 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:05.732 15:28:00 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:05.732 15:28:00 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:05.732 15:28:00 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.732 15:28:00 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.296 00:05:06.296 real 0m1.302s 00:05:06.296 user 0m0.525s 00:05:06.296 sys 0m0.735s 00:05:06.296 15:28:01 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.296 15:28:01 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:06.296 ************************************ 00:05:06.296 END TEST denied 00:05:06.296 ************************************ 00:05:06.296 15:28:01 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:06.296 15:28:01 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:06.296 15:28:01 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.296 15:28:01 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.296 15:28:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:06.296 ************************************ 00:05:06.296 START TEST allowed 00:05:06.296 ************************************ 00:05:06.296 15:28:01 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:06.296 15:28:01 
setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:06.296 15:28:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:06.296 15:28:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.296 15:28:01 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:06.296 15:28:01 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:07.231 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:07.231 15:28:02 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:07.231 15:28:02 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:07.231 15:28:02 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:07.231 15:28:02 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:07.231 15:28:02 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:07.231 15:28:02 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:07.231 15:28:02 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:07.231 15:28:02 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:07.231 15:28:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.231 15:28:02 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.810 00:05:07.810 real 0m1.503s 00:05:07.810 user 0m0.680s 00:05:07.810 sys 0m0.816s 00:05:07.810 15:28:02 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.810 15:28:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:07.810 ************************************ 00:05:07.810 END TEST allowed 00:05:07.810 ************************************ 00:05:07.810 15:28:02 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:07.810 00:05:07.810 real 0m4.420s 00:05:07.810 user 0m1.988s 00:05:07.810 sys 0m2.395s 00:05:07.810 15:28:02 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.810 15:28:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:07.810 ************************************ 00:05:07.810 END TEST acl 00:05:07.810 ************************************ 00:05:08.107 15:28:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:08.107 15:28:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:08.107 15:28:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.107 15:28:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.107 15:28:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:08.107 ************************************ 00:05:08.107 START TEST hugepages 00:05:08.107 ************************************ 00:05:08.107 15:28:02 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:08.107 * Looking for test storage... 
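Both ACL subtests above (denied, driven by PCI_BLOCKED, and allowed, driven by PCI_ALLOWED) finish by confirming which kernel driver a given BDF is bound to, read straight from sysfs with readlink. A small sketch of that verification step; the BDF is the one exercised in this run, but any PCI device present on the host works the same way:

    bdf=0000:00:10.0
    if [[ -e /sys/bus/pci/devices/$bdf ]]; then
        if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
            # The 'driver' symlink points at the bound driver's sysfs directory;
            # its basename is the driver name (e.g. nvme or uio_pci_generic).
            driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
            echo "$bdf is bound to ${driver##*/}"
        else
            echo "$bdf has no driver bound"
        fi
    else
        echo "$bdf is not present on this host"
    fi
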
00:05:08.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 5874352 kB' 'MemAvailable: 7383576 kB' 'Buffers: 3388 kB' 'Cached: 1719948 kB' 'SwapCached: 0 kB' 'Active: 477444 kB' 'Inactive: 1350648 kB' 'Active(anon): 115244 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 106132 kB' 'Mapped: 48704 kB' 'Shmem: 10488 kB' 'KReclaimable: 67080 kB' 'Slab: 139988 kB' 'SReclaimable: 67080 kB' 'SUnreclaim: 72908 kB' 'KernelStack: 6316 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 338780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.107 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 
15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read 
-r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.108 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:08.109 15:28:03 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:08.109 15:28:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.109 15:28:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.109 15:28:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:08.109 ************************************ 00:05:08.109 START TEST default_setup 00:05:08.109 ************************************ 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- 
# node_ids=('0') 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.109 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.676 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.676 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.940 15:28:03 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7957568 kB' 'MemAvailable: 9466636 kB' 'Buffers: 3388 kB' 'Cached: 1719936 kB' 'SwapCached: 0 kB' 'Active: 494768 kB' 'Inactive: 1350648 kB' 'Active(anon): 132568 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 100 kB' 'Writeback: 0 kB' 'AnonPages: 123684 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139664 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72900 kB' 'KernelStack: 6272 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
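The trace above, like the Hugepagesize lookup before it, follows the get_meminfo pattern from setup/common.sh: split each /proc/meminfo line on ':' and spaces, skip every key that does not match the requested field, and echo the value of the one that does. A minimal sketch of that pattern, assuming only the global /proc/meminfo file (the real helper also takes an optional node argument and strips the "Node N" prefix from per-node meminfo files, which this sketch omits):

# Minimal sketch of the get_meminfo loop traced above; global /proc/meminfo only.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys are skipped ("continue" in the trace);
        # the matching key echoes its value and returns 0.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo Hugepagesize    # -> 2048 on this runner
get_meminfo AnonHugePages   # -> 0
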
00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
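Earlier in this run, default_setup called get_test_nr_hugepages with 2097152 for node 0 and the trace records nr_hugepages=1024. That is consistent with dividing the requested amount of huge memory (in kB) by the 2048 kB default page size reported by get_meminfo; the exact expression lives in setup/hugepages.sh, so the division below is an illustration of the traced numbers rather than the script's literal code:

# Arithmetic implied by the traced values (illustrative, not the script verbatim).
default_hugepages=2048              # kB, from Hugepagesize in /proc/meminfo
size=2097152                        # kB, argument seen in the trace
(( size >= default_hugepages )) || size=$default_hugepages
nr_hugepages=$(( size / default_hugepages ))
echo "$nr_hugepages"                # -> 1024 two-megabyte pages (2 GiB total)
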
00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7957320 kB' 'MemAvailable: 9466388 kB' 'Buffers: 3388 kB' 'Cached: 1719936 kB' 'SwapCached: 0 kB' 'Active: 494380 kB' 'Inactive: 1350648 kB' 'Active(anon): 132180 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 180 kB' 'Writeback: 0 kB' 'AnonPages: 123396 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139664 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72900 kB' 'KernelStack: 6272 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
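Before default_setup started, clear_hp (hugepages.sh@208 in the trace) wrote 0 into every per-node hugepage pool and exported CLEAR_HUGE=yes so scripts/setup.sh begins from a clean state. A sketch of that step is below; it walks the sysfs directories directly instead of the nodes_sys array the script iterates, but the paths match the ones shown in the trace:

# Sketch of the clear_hp step traced before default_setup.
clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # the bare "echo 0" seen in the trace
        done
    done
    export CLEAR_HUGE=yes
}
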
00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 
15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.943 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@33 -- # echo 0 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7957068 kB' 'MemAvailable: 9466144 kB' 'Buffers: 3388 kB' 'Cached: 1719932 kB' 'SwapCached: 0 kB' 'Active: 494476 kB' 'Inactive: 1350656 kB' 'Active(anon): 132276 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350656 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 180 kB' 'Writeback: 0 kB' 'AnonPages: 123460 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139652 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72888 kB' 'KernelStack: 6240 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
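The trace above is setup/common.sh's get_meminfo walking a snapshot of /proc/meminfo with IFS=': ' and read -r var val _, skipping every key with continue until it reaches the one requested (here HugePages_Rsvd) and echoing its value (0 in this run). A minimal standalone sketch of the same lookup, reading /proc/meminfo directly instead of snapshotting it into an array first; the helper name and the echo 0 fallback for a missing key are illustrative additions, not SPDK code:

  #!/usr/bin/env bash
  # Sketch of the field lookup the traced get_meminfo loop performs.
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip every other key, as in the trace
          echo "$val"
          return 0
      done < /proc/meminfo
      echo 0                                 # illustrative fallback: key not present
  }
  meminfo_value HugePages_Rsvd               # prints 0 on the node traced above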
00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.944 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.945 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:08.946 nr_hugepages=1024 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:08.946 resv_hugepages=0 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.946 surplus_hugepages=0 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.946 anon_hugepages=0 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.946 15:28:03 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7957068 kB' 'MemAvailable: 9466148 kB' 'Buffers: 3388 kB' 'Cached: 1719936 kB' 'SwapCached: 0 kB' 'Active: 494296 kB' 'Inactive: 1350660 kB' 'Active(anon): 132096 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350660 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 180 kB' 'Writeback: 0 kB' 'AnonPages: 123236 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139652 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72888 kB' 'KernelStack: 6256 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.946 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
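The pass above repeats the same per-key walk, this time for HugePages_Total, and the checks at hugepages.sh@107 and @110 in this trace then verify that the kernel's reported total matches the requested count once surplus and reserved pages are added in, i.e. (( 1024 == nr_hugepages + surp + resv )). A standalone sketch of that consistency check; the variable names are illustrative, and 1024 is the page count this particular run requested:

  # Sketch of the accounting check visible at hugepages.sh@107/@110 above.
  expected=1024
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  if (( total == expected + surp + resv )); then
      echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
  else
      echo "hugepage mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
  fi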
00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:08.947 15:28:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.947 15:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:08.947 15:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:08.947 15:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.947 15:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.947 15:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.947 15:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.947 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.947 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:08.947 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:08.947 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:08.947 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.947 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.948 15:28:04 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7957068 kB' 'MemUsed: 4284900 kB' 'SwapCached: 0 kB' 'Active: 494292 kB' 'Inactive: 1350660 kB' 'Active(anon): 132092 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350660 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 180 kB' 'Writeback: 0 kB' 'FilePages: 1723324 kB' 'Mapped: 48720 kB' 'AnonPages: 123236 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66764 kB' 'Slab: 139652 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.948 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:08.949 node0=1024 expecting 1024 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:08.949 00:05:08.949 real 0m0.955s 00:05:08.949 user 0m0.472s 00:05:08.949 sys 0m0.431s 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.949 15:28:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:08.949 ************************************ 00:05:08.949 END TEST default_setup 00:05:08.949 ************************************ 00:05:09.209 15:28:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:09.209 15:28:04 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:09.209 15:28:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
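The block above is setup/common.sh's get_meminfo helper being traced while default_setup verifies its hugepage counts: it dumps /proc/meminfo with printf, then walks the fields one at a time, hitting "continue" for every key that is not the one requested, until it reaches HugePages_Surp and echoes 0; the test then prints "node0=1024 expecting 1024" and closes out. A minimal sketch of that loop, reconstructed from the traced commands (not a verbatim copy of setup/common.sh, whose real helper takes more options):

    # Sketch of the meminfo lookup seen in the xtrace above; names mirror the trace,
    # but this is a reconstruction, not the exact contents of setup/common.sh.
    shopt -s extglob                       # needed for the "Node N " prefix strip below
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # a per-node query reads that node's own meminfo file instead of the global one
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node N " on per-node entries
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"
            # every non-matching key shows up as a bare "continue" in the trace
            [[ $var == "$get" ]] || continue
            echo "$val"                    # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
            return 0
        done
        return 1
    }

In the trace it returns 0 for HugePages_Surp, nodes_test[0] stays at 1024, and the final "[[ 1024 == \1\0\2\4 ]]" check passes before END TEST default_setup.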
00:05:09.209 15:28:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.209 15:28:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:09.209 ************************************ 00:05:09.209 START TEST per_node_1G_alloc 00:05:09.209 ************************************ 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.209 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:09.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.473 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:09.473 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:09.473 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:09.473 
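per_node_1G_alloc then asks get_test_nr_hugepages for 1048576 kB (1 GiB) on node 0. The division itself is not visible in this slice of the trace, but its result is: with the 2048 kB Hugepagesize reported in the meminfo dumps, 1 GiB works out to the nr_hugepages=512 and nodes_test[0]=512 shown above, and setup output is driven with NRHUGE=512 HUGENODE=0. A quick check of that arithmetic (variable names here are illustrative, not the helper's own):

    # 1 GiB worth of default-sized hugepages, all of them placed on node 0
    size_kb=1048576             # size argument passed to get_test_nr_hugepages, in kB
    hugepagesize_kb=2048        # Hugepagesize from /proc/meminfo
    echo $(( size_kb / hugepagesize_kb ))   # 512 -> NRHUGE=512 HUGENODE=0 scripts/setup.sh

scripts/setup.sh skips the virtio disk at 0000:00:03.0 because it is in use and leaves 0000:00:10.0 and 0000:00:11.0 on uio_pci_generic, and the meminfo dump that follows already shows HugePages_Total: 512 and Hugetlb: 1048576 kB as verify_nr_hugepages starts checking the result.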
15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:09.473 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:09.473 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:09.473 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:09.473 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:09.473 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9003996 kB' 'MemAvailable: 10513080 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494656 kB' 'Inactive: 1350664 kB' 'Active(anon): 132456 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123552 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139736 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72972 kB' 'KernelStack: 6308 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.474 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9004776 kB' 'MemAvailable: 10513860 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494620 kB' 'Inactive: 1350664 kB' 'Active(anon): 132420 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123552 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139716 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72952 kB' 'KernelStack: 6288 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.475 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.476 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9004896 kB' 'MemAvailable: 10513980 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494384 kB' 'Inactive: 1350664 kB' 'Active(anon): 132184 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123320 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139708 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72944 kB' 'KernelStack: 6256 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.477 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.478 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:09.479 nr_hugepages=512 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:09.479 resv_hugepages=0 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.479 surplus_hugepages=0 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.479 anon_hugepages=0 
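
The walk above is common.sh's get_meminfo helper resolving HugePages_Rsvd: it snapshots the relevant meminfo file with mapfile, strips any "Node <N> " prefix, then reads each line with IFS=': ' and skips ("continue") every key until the requested one matches, at which point it echoes the value (0 here, so hugepages.sh sets resv=0). A minimal sketch of that lookup pattern, using a hypothetical helper name and simplified prefix handling (a paraphrase of what the trace shows, not the upstream common.sh):

    # Sketch only: look up one meminfo key, optionally from a NUMA node's view.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node calls read the node-local file instead of the global one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node * }                  # per-node lines carry a "Node <N> " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                       # a kB figure, or a bare page count for HugePages_*
                return 0
            fi
        done < "$mem_f"
        return 1
    }

Against the snapshot printed above, get_meminfo_sketch HugePages_Rsvd would print 0, and get_meminfo_sketch HugePages_Surp 0 mirrors the per-node call made later in this test.
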
00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9004896 kB' 'MemAvailable: 10513980 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494368 kB' 'Inactive: 1350664 kB' 'Active(anon): 132168 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123300 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139704 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72940 kB' 'KernelStack: 6240 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.479 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.480 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.481 15:28:04 
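
With resv=0 established and the expected counters echoed (nr_hugepages=512, resv/surplus/anon all 0), common.sh is re-walking /proc/meminfo, this time for HugePages_Total; the records that follow show the match, an "echo 512", and hugepages.sh checking that the kernel's total equals the requested pages plus surplus and reserved. That identity, restated with the values visible in the snapshot (variable names follow the trace):

    # The accounting hugepages.sh asserts here, with the values from the snapshot above.
    nr_hugepages=512   # pages requested by per_node_1G_alloc (512 x 2048 kB = 1 GiB, cf. 'Hugetlb: 1048576 kB')
    surp=0             # surplus_hugepages
    resv=0             # resv_hugepages, from the HugePages_Rsvd walk above
    total=512          # HugePages_Total as re-read from /proc/meminfo
    (( total == nr_hugepages + surp + resv )) && echo 'global hugepage accounting holds'
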
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 
)) 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9004896 kB' 'MemUsed: 3237072 kB' 'SwapCached: 0 kB' 'Active: 494332 kB' 'Inactive: 1350664 kB' 'Active(anon): 132132 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1723328 kB' 'Mapped: 48720 kB' 'AnonPages: 123268 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66764 kB' 'Slab: 139704 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 
15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.481 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
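
get_nodes found a single NUMA node (no_nodes=1), and the harness is now in its per-node pass: it folds resv into node 0's expectation and the walk above is get_meminfo reading /sys/devices/system/node/node0/meminfo for HugePages_Surp. The records that follow show the match (0), the surplus folded in, and the verdict "node0=512 expecting 512". A simplified sketch of that per-node comparison, reusing the hypothetical helper sketched earlier (the real hugepages.sh tracks expected and observed counts in the nodes_test/nodes_sys arrays rather than plain variables):

    # Simplified per-node check: compare each node's HugePages_Total against the
    # expected share, accounting for surplus pages (none in this run).
    expected=512
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(get_meminfo_sketch HugePages_Total "$node")
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        echo "node$node=$total expecting $expected (surplus: $surp)"
        [[ $total == "$expected" ]] || exit 1     # mirrors the trace's [[ 512 == 512 ]] check
    done
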
00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:09.482 node0=512 expecting 512
00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:09.482 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:09.482
00:05:09.482 real    0m0.522s
00:05:09.482 user    0m0.274s
00:05:09.482 sys     0m0.278s
00:05:09.483 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:09.483 15:28:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:09.483 ************************************
00:05:09.483 END TEST per_node_1G_alloc
00:05:09.483 ************************************
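The "node0=512 expecting 512" line above is produced by the comparison loop traced at setup/hugepages.sh@126-130: an array of expected per-node page counts (nodes_test) is checked against what the system actually reports (nodes_sys). A minimal sketch of that pattern, written only from what the trace shows; how nodes_sys is populated here (the per-node sysfs counter) and the surrounding scaffolding are assumptions, not SPDK's exact implementation:

    #!/usr/bin/env bash
    # Sketch of the per-node verification pattern visible in the trace above.
    declare -a nodes_test nodes_sys sorted_t sorted_s

    nodes_test[0]=512   # expected 1G pages on node0, set earlier by the test
    # Assumption: read the observed count from the per-node sysfs counter.
    nodes_sys[0]=$(cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages 2>/dev/null || echo 0)

    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1    # same bookkeeping as hugepages.sh@127
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        # hugepages.sh@130: the test only passes if every node matches.
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
    done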
00:05:09.742 15:28:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:09.742 15:28:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:09.742 15:28:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:09.742 15:28:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:09.742 15:28:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:09.742 ************************************
00:05:09.742 START TEST even_2G_alloc
00:05:09.742 ************************************
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:09.742 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:10.004 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:10.004 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:10.004 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
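even_2G_alloc's setup step above reduces to simple arithmetic: a 2 GiB request (size=2097152 kB) divided by the default 2048 kB hugepage size gives nr_hugepages=1024, which is spread across the available NUMA nodes (a single node here, so nodes_test[0]=1024) and handed to scripts/setup.sh through NRHUGE and HUGE_EVEN_ALLOC, both visible in the trace. A minimal sketch of that arithmetic only; it illustrates the computation the log implies, not hugepages.sh verbatim:

    #!/usr/bin/env bash
    # Sketch: turn a size request in kB into a hugepage count and an even per-node split.
    size_kb=2097152                                   # 2 GiB, as requested by even_2G_alloc
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    hugepagesize_kb=${hugepagesize_kb:-2048}          # 2048 kB on this VM per the dumps below
    nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 2097152 / 2048 = 1024 pages

    nodes=(/sys/devices/system/node/node[0-9]*)       # NUMA nodes present (one here)
    per_node=$(( nr_hugepages / ${#nodes[@]} ))       # even split: 1024 / 1 = 1024
    echo "requesting $per_node x ${hugepagesize_kb} kB pages per node (${#nodes[@]} node(s))"

    # The test then delegates the actual allocation, as the trace shows:
    #   NRHUGE=1024 HUGE_EVEN_ALLOC=yes ./scripts/setup.sh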
+([0-9]) }") 00:05:10.005 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.005 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.005 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7956328 kB' 'MemAvailable: 9465412 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494968 kB' 'Inactive: 1350664 kB' 'Active(anon): 132768 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123868 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139752 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72988 kB' 'KernelStack: 6292 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:10.005 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.005 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.005 15:28:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.005 15:28:05 setup.sh.hugepages.even_2G_alloc 
00:05:10.005 [... setup/common.sh@31-32: read/compare/continue repeats for each field of the dump above, from MemTotal onward, until AnonHugePages is reached ...]
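The scan that the placeholder above stands for is setup/common.sh's get_meminfo: it slurps /proc/meminfo (or a node's meminfo file, with the "Node N " prefix stripped, as the mapfile/mem= lines show), then walks the entries with IFS=': ' until the requested field name matches and echoes its value. A minimal reconstruction of that pattern, written from the trace rather than copied from SPDK's source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern below

    # Minimal get_meminfo-style reader, reconstructed from the traced pattern.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # Per-node variant: use the node's own meminfo file when a node is given.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix on per-node files
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then     # e.g. AnonHugePages, HugePages_Surp
                echo "${val:-0}"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages     # -> 0 on this run, per the dump above
    get_meminfo HugePages_Total   # -> 1024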
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7956328 kB' 'MemAvailable: 9465412 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494604 kB' 'Inactive: 1350664 kB' 'Active(anon): 132404 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123540 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139772 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 73008 kB' 'KernelStack: 6320 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
00:05:10.006 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:10.006 [... setup/common.sh@31-32: read/compare/continue repeats for each field of the dump above, from MemTotal onward, until HugePages_Surp is reached ...]
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
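At this point verify_nr_hugepages has collected anon=0 and surp=0 and is about to query HugePages_Rsvd the same way. The meminfo dumps already show the requested allocation in the kernel counters; a quick sanity check of those numbers (values copied from the dumps in this log, nothing else assumed):

    # Values taken from the meminfo dumps above.
    hugepages_total=1024
    hugepagesize_kb=2048
    hugetlb_kb=2097152

    (( hugepages_total * hugepagesize_kb == hugetlb_kb )) \
      && echo "1024 x 2048 kB = 2097152 kB = 2 GiB, matching the even_2G_alloc request"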
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:10.008 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7956076 kB' 'MemAvailable: 9465160 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494384 kB' 'Inactive: 1350664 kB' 'Active(anon): 132184 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123292 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139768 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 73004 kB' 'KernelStack: 6256 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
00:05:10.008 [... setup/common.sh@31-32: read/compare/continue repeats for each field of the dump above while get_meminfo scans for HugePages_Rsvd ...]
00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.009 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
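The span condensed above, and the handful of comparisons that follow, are the harness's get_meminfo helper walking /proc/meminfo one "Key: value" line at a time with IFS=': ' read -r var val _ until it reaches the requested field, here HugePages_Rsvd (0 on this VM). The backslash-heavy right-hand side in each [[ ... == \H\u\g\e... ]] test is simply how bash xtrace prints the quoted, literal comparison string. A minimal standalone sketch of that kind of lookup, not the actual setup/common.sh implementation, could look like this:

    #!/usr/bin/env bash
    # Minimal sketch of the lookup this xtrace corresponds to; illustrative only,
    # not the real setup/common.sh get_meminfo.
    get_meminfo_field() {
        local get=$1 var val _
        # /proc/meminfo lines look like "HugePages_Rsvd:        0"
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_field HugePages_Rsvd   # prints 0 here, matching the echo 0 in the trace below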
continue 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:10.010 nr_hugepages=1024 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:10.010 resv_hugepages=0 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:10.010 surplus_hugepages=0 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:10.010 anon_hugepages=0 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 
-- # local mem_f mem 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7956076 kB' 'MemAvailable: 9465160 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494144 kB' 'Inactive: 1350664 kB' 'Active(anon): 131944 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123088 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139760 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72996 kB' 'KernelStack: 6256 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.010 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[repetitive xtrace condensed: the same per-key scan skips every /proc/meminfo entry from Cached up to CmaTotal, since none of them matches HugePages_Total]
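The meminfo snapshot printed above already contains the values this lookup is after: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, with a 2048 kB page size. The (( 1024 == nr_hugepages + surp + resv )) checks around this lookup compare the reported total against the count the test configured plus the surplus and reserved pages it just read, all zero in this run. A hedged way to re-run the same accounting check outside the harness, using only standard procfs paths and nothing SPDK-specific:

    #!/usr/bin/env bash
    # Re-check the hugepage accounting this trace verifies; the paths are standard
    # Linux procfs entries and the assertion mirrors the (( ... )) tests in the log.
    want=$(cat /proc/sys/vm/nr_hugepages)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    echo "requested=$want total=$total rsvd=$rsvd surp=$surp"
    (( total == want + surp + rsvd )) && echo "hugepage accounting OK" || echo "mismatch"

On this run the numbers line up as 1024 == 1024 + 0 + 0, which is why the arithmetic tests in the trace pass silently.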
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7956076 kB' 'MemUsed: 4285892 kB' 'SwapCached: 0 kB' 'Active: 494372 kB' 
'Inactive: 1350664 kB' 'Active(anon): 132172 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1723328 kB' 'Mapped: 48720 kB' 'AnonPages: 123316 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66764 kB' 'Slab: 139760 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.012 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
[repetitive xtrace condensed: the per-key scan now runs over the node0 meminfo entries, skipping everything from Inactive(anon) up to FilePmdMapped while looking for HugePages_Surp]
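The lookup condensed above is node-scoped: the trace shows setup/common.sh@24 switching mem_f to /sys/devices/system/node/node0/meminfo and setup/common.sh@29 stripping the leading "Node 0 " prefix that every line in that file carries before running the same key scan. With a single NUMA node in this VM, the test expects all 1024 pages to sit on node0, which the node0=1024 expecting 1024 line just below confirms. A hypothetical per-node variant of the field lookup, again only a sketch:

    #!/usr/bin/env bash
    # Hypothetical per-node lookup. Lines in /sys/devices/system/node/node<N>/meminfo
    # look like "Node 0 HugePages_Surp:     0", so two extra fields are read first
    # instead of stripping the prefix the way the harness does.
    node_meminfo_field() {
        local node=$1 get=$2 _node _id key val _
        while read -r _node _id key val _; do
            if [[ ${key%:} == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    node_meminfo_field 0 HugePages_Total   # 1024 at this point in the run
    node_meminfo_field 0 HugePages_Surp    # 0, matching the echo 0 below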
setup/common.sh@31 -- # IFS=': ' 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:10.272 node0=1024 expecting 1024 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:10.272 00:05:10.272 real 0m0.495s 00:05:10.272 user 0m0.250s 00:05:10.272 sys 0m0.278s 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.272 15:28:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:10.272 ************************************ 00:05:10.272 END TEST even_2G_alloc 00:05:10.272 ************************************ 00:05:10.272 15:28:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:10.272 15:28:05 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:10.272 15:28:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.272 15:28:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.272 15:28:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:10.272 ************************************ 00:05:10.272 START TEST odd_alloc 00:05:10.272 ************************************ 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # 
odd_alloc 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.272 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.535 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:10.535 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- 
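The odd_alloc test starting here asks for 2098176 kB of hugepage memory (HUGEMEM=2049, i.e. 2049 MiB), which does not divide evenly into 2048 kB pages; the harness ends up programming 1025 pages, and the meminfo snapshot that follows reports HugePages_Total: 1025 and Hugetlb: 2099200 kB accordingly. One way to reproduce those numbers, noting that the exact rounding rule inside setup/hugepages.sh is not visible in this trace:

    #!/usr/bin/env bash
    # Back-of-the-envelope check of the odd_alloc sizing; the variable names are
    # illustrative, only the numbers come from the log above.
    hugemem_mb=2049          # HUGEMEM=2049
    hugepagesize_kb=2048     # Hugepagesize: 2048 kB
    size_kb=$(( hugemem_mb * 1024 ))                               # 2098176 kB
    pages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb )) # rounds up to 1025
    echo "size=${size_kb} kB -> nr_hugepages=${pages} -> Hugetlb=$(( pages * hugepagesize_kb )) kB"
    # size=2098176 kB -> nr_hugepages=1025 -> Hugetlb=2099200 kB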
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7954808 kB' 'MemAvailable: 9463892 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494680 kB' 'Inactive: 1350664 kB' 'Active(anon): 132480 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123592 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139764 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 73000 kB' 'KernelStack: 6244 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.535 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7954808 kB' 'MemAvailable: 9463892 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494428 kB' 'Inactive: 1350664 kB' 'Active(anon): 132228 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123392 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139772 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 73008 kB' 'KernelStack: 6288 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
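Each of these long runs is one pass of the get_meminfo helper: it snapshots /proc/meminfo (or the per-node sysfs file when a node id is given), strips any "Node N " prefix, then walks the key/value pairs with IFS=': ' until the requested key matches and its value is echoed. A condensed, self-contained sketch of that observable pattern (not a verbatim copy of setup/common.sh):

  get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    # with a node id, read the per-node file exposed by sysfs instead
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
      # sysfs lines look like "Node 0 HugePages_Surp: 0"; strip that prefix
      [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] || continue   # skip every key until the requested one
      echo "$val"                        # value only; the kB unit lands in "_"
      return 0
    done < "$mem_f"
    return 1
  }
  get_meminfo HugePages_Surp             # prints 0 in the run traced here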
00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.536 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.537 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 
15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
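The three passes in this block collect anon (AnonHugePages), surp (HugePages_Surp) and resv (HugePages_Rsvd); verify_nr_hugepages then requires, as the trace further below shows, that the configured count still adds up once surplus and reserved pages are accounted for. Restated with this run's numbers:

  nr_hugepages=1025 anon=0 surp=0 resv=0        # values read back above and below
  (( 1025 == nr_hugepages + surp + resv )) &&   # 1025 == 1025 + 0 + 0
  (( 1025 == nr_hugepages )) && echo 'counts consistent'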
# [[ -n '' ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7954808 kB' 'MemAvailable: 9463892 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494616 kB' 'Inactive: 1350664 kB' 'Active(anon): 132416 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123580 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139772 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 73008 kB' 'KernelStack: 6256 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 
15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.538 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.539 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 
15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:10.540 nr_hugepages=1025 00:05:10.540 resv_hugepages=0 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:10.540 surplus_hugepages=0 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:10.540 anon_hugepages=0 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7954808 kB' 'MemAvailable: 9463892 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494592 kB' 'Inactive: 1350664 kB' 'Active(anon): 132392 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123544 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139768 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 73004 kB' 'KernelStack: 6240 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.540 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.541 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7954808 kB' 'MemUsed: 4287160 kB' 'SwapCached: 0 kB' 'Active: 494544 kB' 'Inactive: 1350664 kB' 'Active(anon): 132344 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1723328 kB' 'Mapped: 48720 kB' 'AnonPages: 123460 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66764 kB' 'Slab: 139764 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 73000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.542 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.543 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.802 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.802 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.802 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.802 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.802 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.802 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
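The long run of IFS=': ' / read -r var val _ / [[ ... ]] / continue lines in this trace is setup/common.sh's get_meminfo scanning every field of /proc/meminfo (or of /sys/devices/system/node/nodeN/meminfo when a node is given) until it reaches the requested key, then echoing that key's value; that is where the resv=0, the HugePages_Total of 1025, and the node-0 HugePages_Surp of 0 above and below come from. A minimal standalone sketch of that lookup follows; the function name get_meminfo_sketch and its exact structure are illustrative assumptions, not the traced script itself:

  # Illustrative sketch only -- not the exact setup/common.sh code being traced here.
  # Print the value of one /proc/meminfo field (kB or pages), or 0 if it is absent.
  # With a node number, read /sys/devices/system/node/node<N>/meminfo instead and
  # strip the "Node N " prefix those per-node files put in front of every line.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val rest
      while read -r line; do
          line=${line#"Node $node "}
          IFS=': ' read -r var val rest <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"
              return 0
          fi
      done < "$mem_f"
      echo 0
  }

Under those assumptions, resv=$(get_meminfo_sketch HugePages_Rsvd) corresponds to the resv=0 result in the trace, and get_meminfo_sketch HugePages_Surp 0 corresponds to the node-0 query whose per-node meminfo dump appears above.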
00:05:10.802 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.802 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.802 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.803 node0=1025 expecting 1025 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:10.803 00:05:10.803 real 0m0.490s 00:05:10.803 user 0m0.250s 00:05:10.803 sys 0m0.271s 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.803 15:28:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:10.803 ************************************ 00:05:10.803 END TEST odd_alloc 00:05:10.803 ************************************ 00:05:10.803 15:28:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:10.803 15:28:05 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:10.803 15:28:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- 
# '[' 2 -le 1 ']' 00:05:10.803 15:28:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.803 15:28:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:10.803 ************************************ 00:05:10.803 START TEST custom_alloc 00:05:10.803 ************************************ 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:10.803 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # 
get_test_nr_hugepages_per_node 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.804 15:28:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.067 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:11.067 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.067 15:28:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9012748 kB' 'MemAvailable: 10521832 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494784 kB' 'Inactive: 1350664 kB' 'Active(anon): 132584 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123680 kB' 'Mapped: 49008 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139736 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72972 kB' 'KernelStack: 6244 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.067 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9012748 kB' 'MemAvailable: 10521832 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494216 kB' 'Inactive: 1350664 kB' 'Active(anon): 132016 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123132 kB' 'Mapped: 48780 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139736 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72972 kB' 'KernelStack: 6228 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
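The long run of field-by-field comparisons above is the xtrace of setup/common.sh scanning /proc/meminfo for a single key (here AnonHugePages, then HugePages_Surp): each line is split on IFS=': ' into a name and a value, every non-matching field falls through to continue, and the matching value is echoed back to hugepages.sh. A minimal sketch of that lookup pattern, assuming a simplified standalone helper (the name get_meminfo_value and the direct read from /proc/meminfo are illustrative, not the exact setup/common.sh code):

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern traced above: walk /proc/meminfo
# one "Key: value" pair at a time and print the value of the requested key.
# Helper name and direct file read are illustrative assumptions.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching field produces one "continue" entry in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done </proc/meminfo
    return 1
}

get_meminfo_value AnonHugePages    # prints 0 on the VM in this log (value is in kB)
get_meminfo_value HugePages_Surp   # prints 0

The traced script first snapshots the file into an array with mapfile (the mem= assignments above) so the same parsing loop can serve both whole-system and per-node queries.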
00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.068 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.069 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:11.070 15:28:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9012748 kB' 'MemAvailable: 10521832 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494216 kB' 'Inactive: 1350664 kB' 'Active(anon): 132016 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123392 kB' 'Mapped: 48780 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139736 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72972 kB' 'KernelStack: 6228 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
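The "local node=", the "-e /sys/devices/system/node/node/meminfo" test and the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace show how the same parser also serves per-NUMA-node queries: when a node id is given, the per-node meminfo file is read instead, and its "Node <N> " line prefix is stripped so both sources parse identically. A sketch of that source selection, assuming extglob is enabled and using an illustrative function name rather than the actual setup/common.sh code:

#!/usr/bin/env bash
# Sketch of the source selection visible in the trace: fall back to
# /proc/meminfo when no node is given, otherwise read the per-node file
# and strip its "Node <N> " line prefix. Function name is illustrative.
shopt -s extglob

read_meminfo_lines() {
    local node=$1 mem_f=/proc/meminfo
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node lines look like "Node 0 MemTotal: ... kB"; drop the prefix
    # so the key/value parsing is the same for both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

read_meminfo_lines       # whole-system view
read_meminfo_lines 0     # NUMA node 0, when the per-node file exists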
00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.070 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:11.071 nr_hugepages=512 00:05:11.071 resv_hugepages=0 00:05:11.071 surplus_hugepages=0 00:05:11.071 anon_hugepages=0 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.071 
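With anon, surp and resv all measured as 0 and nr_hugepages reported as 512, hugepages.sh then checks that the 512 pages requested by the custom_alloc test are fully accounted for: 512 must equal nr_hugepages + surp + resv, and nr_hugepages alone must already be 512. A compact restatement of that arithmetic, with illustrative function and variable names rather than the actual hugepages.sh code:

#!/usr/bin/env bash
# Restatement of the consistency check traced above. The names are
# illustrative; the arithmetic mirrors the
# "(( 512 == nr_hugepages + surp + resv ))" and "(( 512 == nr_hugepages ))"
# lines in the log.
check_custom_alloc() {
    local expected=$1 nr_hugepages=$2 surp=$3 resv=$4
    (( expected == nr_hugepages + surp + resv )) || { echo "page count mismatch"; return 1; }
    (( expected == nr_hugepages ))               || { echo "surplus/reserved pages present"; return 1; }
    echo "custom_alloc accounting OK: $nr_hugepages pages"
}

# Values from the snapshot above: 512 hugepages, no surplus or reserved pages.
check_custom_alloc 512 512 0 0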
15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9012748 kB' 'MemAvailable: 10521832 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494548 kB' 'Inactive: 1350664 kB' 'Active(anon): 132348 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123456 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139704 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72940 kB' 'KernelStack: 6272 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.071 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.072 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.333 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9012748 kB' 'MemUsed: 3229220 kB' 'SwapCached: 0 kB' 'Active: 494508 kB' 'Inactive: 1350664 kB' 'Active(anon): 132308 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1723328 kB' 'Mapped: 48720 kB' 'AnonPages: 123424 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66764 kB' 'Slab: 139704 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.334 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.335 node0=512 expecting 512 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:11.335 00:05:11.335 real 0m0.513s 00:05:11.335 user 0m0.253s 00:05:11.335 sys 0m0.279s 00:05:11.335 ************************************ 00:05:11.335 END TEST custom_alloc 00:05:11.335 ************************************ 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.335 15:28:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:11.335 15:28:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:11.335 15:28:06 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:11.335 15:28:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.335 15:28:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.335 15:28:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:11.335 ************************************ 00:05:11.335 START TEST no_shrink_alloc 00:05:11.335 ************************************ 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- 
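At this point the custom_alloc check has passed (node0 held the 512 pages that were requested) and the no_shrink_alloc test begins by converting its requested size, 2097152 kB, into a hugepage count. The conversion is a division by the default hugepage size, which the meminfo dumps above report as Hugepagesize: 2048 kB; a small sketch with illustrative variable names:

size_kb=2097152                                   # argument to get_test_nr_hugepages in the trace
hugepage_kb=2048                                  # Hugepagesize from the meminfo snapshots
nr_hugepages=$(( size_kb / hugepage_kb ))         # 2097152 / 2048 = 1024
echo "nodes_test[0]=$nr_hugepages"                # the whole allocation is pinned to node 0

The trace that follows sets nr_hugepages=1024 and nodes_test[0]=1024 accordingly before re-running scripts/setup.sh.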
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.335 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.599 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:11.599 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7968048 kB' 'MemAvailable: 9477132 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494984 kB' 'Inactive: 1350664 kB' 'Active(anon): 132784 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123856 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139680 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72916 kB' 'KernelStack: 6212 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.599 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.600 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7967800 kB' 'MemAvailable: 9476884 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494776 kB' 'Inactive: 1350664 kB' 'Active(anon): 132576 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123700 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139680 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72916 kB' 'KernelStack: 6240 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.601 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 
15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.602 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7967800 kB' 'MemAvailable: 9476884 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494488 kB' 'Inactive: 1350664 kB' 'Active(anon): 132288 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123428 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139680 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72916 kB' 'KernelStack: 6272 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:11.603 15:28:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.603 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 
15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.604 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.605 15:28:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:11.605 nr_hugepages=1024 00:05:11.605 resv_hugepages=0 00:05:11.605 surplus_hugepages=0 00:05:11.605 anon_hugepages=0 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7968464 kB' 'MemAvailable: 9477548 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494188 kB' 'Inactive: 1350664 kB' 'Active(anon): 131988 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123092 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139680 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72916 kB' 'KernelStack: 6240 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.605 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
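The long run of [[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue lines above is setup/common.sh's get_meminfo scanning every 'key: value' pair of the captured meminfo snapshot until it reaches the requested field and echoes its value. A minimal sketch of that lookup, assuming only the /proc/meminfo and per-node sysfs meminfo formats visible in the trace (the helper name and structure here are illustrative, not the verbatim script):

#!/usr/bin/env bash
# Sketch of the field lookup the trace is stepping through: scan the chosen
# meminfo file line by line, skip non-matching keys, echo the value on a hit.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a node id is given, prefer the per-node counters under sysfs.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        # The sysfs per-node file prefixes each line with "Node <id> "; drop it.
        if [[ $line == Node\ * ]]; then
            line=${line#Node }
            line=${line#* }
        fi
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the repeated "continue" seen above
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
# Example calls matching the lookups in this log:
get_meminfo_sketch HugePages_Total      # global count, expected to print 1024
get_meminfo_sketch HugePages_Surp 0     # node 0 surplus, expected to print 0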
00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.606 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7968464 kB' 'MemUsed: 4273504 kB' 'SwapCached: 0 kB' 'Active: 494188 kB' 'Inactive: 1350664 kB' 'Active(anon): 131988 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1723328 kB' 'Mapped: 48720 kB' 'AnonPages: 123352 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66764 kB' 'Slab: 139680 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.867 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.868 node0=1024 expecting 1024 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:11.868 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.869 15:28:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.135 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:12.135 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:12.135 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.135 15:28:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.135 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7968864 kB' 'MemAvailable: 9477948 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 495364 kB' 'Inactive: 1350664 kB' 'Active(anon): 133164 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123864 kB' 'Mapped: 49016 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139680 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72916 kB' 'KernelStack: 6356 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 
15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
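The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test a little earlier in this run is the guard in front of this AnonHugePages scan: anonymous huge page usage is only worth checking when transparent hugepages are not forced to 'never'. A short sketch of that guard; the sysfs path is an assumption, since the trace only shows the already-expanded value:

#!/usr/bin/env bash
# Assumed source of the "always [madvise] never" string seen in the trace;
# the log itself only shows the expanded value, not the file it came from.
thp_setting=$(< /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp_setting != *"[never]"* ]]; then
    # THP is not disabled, so AnonHugePages can be non-zero and is worth reading.
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "AnonHugePages: ${anon_kb} kB"
fi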
00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.136 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7968864 kB' 'MemAvailable: 9477948 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494708 kB' 'Inactive: 1350664 kB' 'Active(anon): 132508 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123652 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139684 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72920 kB' 'KernelStack: 6288 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 
kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.137 15:28:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.137 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 
15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.138 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.139 15:28:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7968612 kB' 'MemAvailable: 9477696 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494488 kB' 'Inactive: 1350664 kB' 'Active(anon): 132288 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123440 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139692 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72928 kB' 'KernelStack: 6272 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.139 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.140 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:12.141 nr_hugepages=1024 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:12.141 resv_hugepages=0 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.141 surplus_hugepages=0 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.141 anon_hugepages=0 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7969544 kB' 'MemAvailable: 9478628 kB' 'Buffers: 3388 kB' 'Cached: 1719940 kB' 'SwapCached: 0 kB' 'Active: 494268 kB' 'Inactive: 1350664 kB' 'Active(anon): 132068 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123180 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66764 kB' 'Slab: 139692 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72928 kB' 'KernelStack: 6272 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.141 15:28:07 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _
[get_meminfo scans the remaining /proc/meminfo fields — Active(anon) through Unaccepted — and "continue"s past each one until it reaches HugePages_Total]
00:05:12.142 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.142 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:12.142 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for
node in /sys/devices/system/node/node+([0-9]) 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7969544 kB' 'MemUsed: 4272424 kB' 'SwapCached: 0 kB' 'Active: 494404 kB' 'Inactive: 1350664 kB' 'Active(anon): 132204 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1350664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1723328 kB' 'Mapped: 48720 kB' 'AnonPages: 123332 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66764 kB' 'Slab: 139684 kB' 'SReclaimable: 66764 kB' 'SUnreclaim: 72920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.143 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.143 
15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[get_meminfo scans the node0 meminfo fields — MemUsed through Unaccepted — and "continue"s past each one while looking for HugePages_Surp]
00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.144 15:28:07
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:12.144 node0=1024 expecting 1024 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:12.144 00:05:12.144 real 0m0.971s 00:05:12.144 user 0m0.500s 00:05:12.144 sys 0m0.530s 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.144 15:28:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:12.144 ************************************ 00:05:12.144 END TEST no_shrink_alloc 00:05:12.144 ************************************ 00:05:12.403 15:28:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:12.403 15:28:07 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:12.403 15:28:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:12.403 15:28:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:12.403 15:28:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.403 15:28:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.403 15:28:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.403 15:28:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.403 15:28:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:12.403 15:28:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:12.403 ************************************ 00:05:12.403 END TEST hugepages 00:05:12.403 ************************************ 00:05:12.403 00:05:12.403 real 0m4.352s 00:05:12.403 user 0m2.138s 00:05:12.403 sys 0m2.316s 00:05:12.403 15:28:07 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.403 15:28:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:12.403 15:28:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:12.403 15:28:07 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:12.403 15:28:07 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.403 15:28:07 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.403 15:28:07 
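
The run of "continue" entries above is setup/common.sh's meminfo lookup: it reads either /proc/meminfo or /sys/devices/system/node/node0/meminfo, strips the "Node <n>" prefix, and walks the fields until it finds the one requested (HugePages_Total system-wide, then HugePages_Surp for node 0). A simplified, self-contained version of that lookup — mirroring what the trace shows, not the exact SPDK helper — looks like this:

    #!/usr/bin/env bash
    # Simplified meminfo lookup modeled on the trace above (not the SPDK helper itself):
    # print the value of one field from /proc/meminfo or from one NUMA node's meminfo.
    shopt -s extglob   # for the +([0-9]) pattern that strips the "Node <n> " prefix

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local mem var val _

        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line with "Node <n> "

        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then     # e.g. HugePages_Total, HugePages_Surp
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # The check the test just made: node 0 should still report all 1024 pages.
    [[ $(get_meminfo HugePages_Total 0) == 1024 ]] && echo 'node0=1024 expecting 1024'
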
setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:12.403 ************************************ 00:05:12.403 START TEST driver 00:05:12.403 ************************************ 00:05:12.403 15:28:07 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:12.403 * Looking for test storage... 00:05:12.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:12.403 15:28:07 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:12.403 15:28:07 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.403 15:28:07 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.971 15:28:07 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:12.971 15:28:07 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.971 15:28:07 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.971 15:28:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:12.971 ************************************ 00:05:12.971 START TEST guess_driver 00:05:12.971 ************************************ 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:12.971 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:12.971 Looking for 
driver=uio_pci_generic 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.971 15:28:07 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:13.538 15:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:13.538 15:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:13.538 15:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.796 15:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.796 15:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:13.796 15:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.796 15:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.796 15:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:13.796 15:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.796 15:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:13.796 15:28:08 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:13.796 15:28:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.796 15:28:08 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.364 00:05:14.364 real 0m1.422s 00:05:14.364 user 0m0.536s 00:05:14.364 sys 0m0.882s 00:05:14.364 15:28:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.364 15:28:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:14.364 ************************************ 00:05:14.364 END TEST guess_driver 00:05:14.364 ************************************ 00:05:14.364 15:28:09 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:14.364 00:05:14.364 real 0m2.077s 00:05:14.364 user 0m0.753s 00:05:14.364 sys 0m1.355s 00:05:14.364 15:28:09 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.364 15:28:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:14.364 ************************************ 00:05:14.364 END TEST driver 00:05:14.364 ************************************ 00:05:14.364 15:28:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:14.364 15:28:09 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:14.364 15:28:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.364 15:28:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.364 15:28:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:14.364 ************************************ 00:05:14.364 START TEST devices 00:05:14.364 ************************************ 00:05:14.364 15:28:09 setup.sh.devices -- common/autotest_common.sh@1123 -- # 
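
The driver pick traced above boils down to: use vfio-pci when the host has IOMMU groups (or unsafe no-IOMMU mode is switched on), otherwise fall back to uio_pci_generic if modprobe can resolve that module; on this VM there were no IOMMU groups, hence uio_pci_generic. A rough stand-alone sketch of that decision (simplified, not driver.sh's actual functions):

    #!/usr/bin/env bash
    # Rough approximation of the traced driver choice; sketch only, not driver.sh.
    pick_driver() {
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        local unsafe_vfio=N

        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi

        # vfio-pci needs a working IOMMU (at least one group) or the unsafe override.
        if { ((${#iommu_groups[@]} > 0)) && [[ -e ${iommu_groups[0]} ]]; } || [[ $unsafe_vfio == [Yy] ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic &> /dev/null; then
            # uio_pci_generic is usable as soon as the module (and its deps) resolve.
            echo uio_pci_generic
        else
            echo 'No valid driver found'
            return 1
        fi
    }

    driver=$(pick_driver)
    echo "Looking for driver=$driver"   # prints uio_pci_generic on this no-IOMMU VM
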
/home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:14.622 * Looking for test storage... 00:05:14.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:14.622 15:28:09 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:14.622 15:28:09 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:14.622 15:28:09 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.622 15:28:09 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.190 15:28:10 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:15.190 15:28:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:15.190 15:28:10 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:15.190 15:28:10 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:15.190 15:28:10 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:15.190 15:28:10 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:15.190 15:28:10 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:15.190 15:28:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:15.190 
15:28:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:15.190 15:28:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:15.190 15:28:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:15.190 15:28:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:15.190 15:28:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:15.190 15:28:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:15.190 15:28:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:15.190 No valid GPT data, bailing 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:15.450 15:28:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:15.450 15:28:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:15.450 15:28:10 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:15.450 No valid GPT data, bailing 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:15.450 15:28:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:15.450 15:28:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:15.450 15:28:10 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:15.450 15:28:10 setup.sh.devices -- 
setup/devices.sh@201 -- # ctrl=nvme0 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:15.450 No valid GPT data, bailing 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:15.450 15:28:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:15.450 15:28:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:15.450 15:28:10 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:15.450 No valid GPT data, bailing 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:15.450 15:28:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:15.450 15:28:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:15.450 15:28:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:15.450 15:28:10 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:15.450 15:28:10 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:15.450 15:28:10 setup.sh.devices -- 
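
Every namespace above goes through the same filter before nvme0n1 is chosen as test_disk: skip zoned devices, treat a namespace as free when no partition table is found ("No valid GPT data, bailing"), and keep it only if it is at least min_disk_size (3 GiB). A simplified sketch of that filter using plain blkid (the repo's spdk-gpt.py probe is left out):

    #!/usr/bin/env bash
    # Simplified device filter, as traced above: collect NVMe namespaces that are
    # not zoned, carry no partition table, and are at least 3 GiB. Sketch only.
    shopt -s extglob
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, same value as the test
    blocks=()

    for block in /sys/block/nvme!(*c*); do      # skip hidden nvmeXcYnZ controller nodes
        [[ -e $block ]] || continue
        dev=${block##*/}

        # Zoned namespaces are excluded up front (queue/zoned reports "none" otherwise).
        [[ $(<"$block/queue/zoned") != none ]] && continue

        # Any recognizable partition table means the namespace is already in use.
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2> /dev/null) ]] && continue

        # /sys/block/<dev>/size is reported in 512-byte sectors.
        size=$(( $(<"$block/size") * 512 ))
        ((size >= min_disk_size)) && blocks+=("$dev")
    done

    printf 'candidate test disks: %s\n' "${blocks[*]}"
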
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.450 15:28:10 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.450 15:28:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:15.450 ************************************ 00:05:15.450 START TEST nvme_mount 00:05:15.450 ************************************ 00:05:15.450 15:28:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:15.450 15:28:10 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:15.450 15:28:10 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:15.450 15:28:10 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.450 15:28:10 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.450 15:28:10 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:15.450 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:15.450 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:15.450 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:15.450 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:15.450 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:15.450 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:15.709 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:15.709 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.709 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.709 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:15.709 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.709 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:15.709 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:15.709 15:28:10 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:16.644 Creating new GPT entries in memory. 00:05:16.644 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:16.644 other utilities. 00:05:16.644 15:28:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:16.644 15:28:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.644 15:28:11 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:16.644 15:28:11 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.644 15:28:11 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:17.578 Creating new GPT entries in memory. 00:05:17.578 The operation has completed successfully. 
00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58854 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.578 15:28:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:17.835 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:17.835 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:17.835 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:17.835 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.835 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:17.836 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.094 15:28:12 
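
The entries that follow are the verify half of nvme_mount: format the new partition, mount it under the test's nvme_mount directory, drop a test_nvme marker file, and confirm both that the file is visible and that setup.sh config reports the namespace as active ("so not binding PCI dev") instead of binding 0000:00:11.0. A condensed sketch of that sequence, with the status check simplified to a grep over the setup.sh output:

    #!/usr/bin/env bash
    # Condensed sketch of the mkfs/mount/verify sequence traced here; the real test
    # parses the per-device status lines from setup.sh rather than grepping.
    set -e
    part=/dev/nvme0n1p1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    test_file=$mnt/test_nvme

    mkdir -p "$mnt"
    mkfs.ext4 -qF "$part"          # -q quiet, -F force (operating on a block device)
    mount "$part" "$mnt"
    : > "$test_file"               # marker file the verify step looks for

    # 1) The marker must be reachable through the mount point.
    [[ -e $test_file ]]

    # 2) setup.sh must refuse to bind the in-use namespace's PCI device.
    PCI_ALLOWED=0000:00:11.0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh config \
        | grep -q 'so not binding PCI dev'

    echo 'nvme_mount verify OK'
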
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:18.094 15:28:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:18.094 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:18.094 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:18.351 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:18.351 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:18.351 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:18.351 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.351 15:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:18.609 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:18.609 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:18.609 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:18.609 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.609 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:18.609 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:18.867 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:18.868 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:18.868 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.868 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:18.868 15:28:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:18.868 15:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.868 15:28:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:19.126 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:19.126 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:19.126 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:19.126 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.126 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:19.126 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:19.385 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:19.385 00:05:19.385 real 0m3.868s 00:05:19.385 user 0m0.656s 00:05:19.385 sys 0m0.970s 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.385 15:28:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:19.385 ************************************ 00:05:19.385 END TEST nvme_mount 00:05:19.385 
************************************ 00:05:19.385 15:28:14 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:19.385 15:28:14 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:19.385 15:28:14 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.385 15:28:14 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.385 15:28:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:19.385 ************************************ 00:05:19.385 START TEST dm_mount 00:05:19.385 ************************************ 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:19.385 15:28:14 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:20.759 Creating new GPT entries in memory. 00:05:20.759 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:20.759 other utilities. 00:05:20.759 15:28:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:20.759 15:28:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.759 15:28:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:20.759 15:28:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.759 15:28:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:21.691 Creating new GPT entries in memory. 00:05:21.691 The operation has completed successfully. 00:05:21.691 15:28:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:21.691 15:28:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.691 15:28:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:21.691 15:28:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:21.691 15:28:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:22.625 The operation has completed successfully. 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59287 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.625 
15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.625 15:28:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.882 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.882 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:22.882 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:22.882 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.882 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.882 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.882 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.882 15:28:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:23.140 15:28:18 
setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.140 15:28:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:23.397 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.397 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:23.397 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:23.397 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.397 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.397 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.397 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.398 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 
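The dm_mount trace above builds its device by zapping the disk with sgdisk, carving two 262144-sector partitions, and handing both to dmsetup as nvme_dm_test before formatting and mounting the mapper node. A minimal by-hand sketch of that assembly, assuming a plain linear table (the exact table devices.sh feeds to dmsetup is not echoed in this trace):

# zap and repartition, matching the sgdisk calls shown above
sgdisk /dev/nvme0n1 --zap-all
sgdisk /dev/nvme0n1 --new=1:2048:264191
sgdisk /dev/nvme0n1 --new=2:264192:526335
# concatenate the two 128 MiB partitions into one 256 MiB dm target (assumed linear table)
dmsetup create nvme_dm_test <<'EOF'
0 262144 linear /dev/nvme0n1p1 0
262144 262144 linear /dev/nvme0n1p2 0
EOF
# format and mount the mapper device the way common.sh does
mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount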
00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:23.656 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:23.656 00:05:23.656 real 0m4.197s 00:05:23.656 user 0m0.459s 00:05:23.656 sys 0m0.688s 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.656 ************************************ 00:05:23.656 END TEST dm_mount 00:05:23.656 ************************************ 00:05:23.656 15:28:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:23.656 15:28:18 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:23.656 15:28:18 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:23.656 15:28:18 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:23.656 15:28:18 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.656 15:28:18 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.656 15:28:18 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:23.656 15:28:18 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.656 15:28:18 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.912 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.912 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.912 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.912 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.912 15:28:19 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:23.912 15:28:19 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:23.912 15:28:19 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.912 15:28:19 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.912 15:28:19 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.912 15:28:19 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.912 15:28:19 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:23.912 00:05:23.912 real 0m9.567s 00:05:23.912 user 0m1.760s 00:05:23.912 sys 0m2.217s 00:05:23.912 15:28:19 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.912 ************************************ 00:05:23.912 END TEST devices 00:05:23.912 ************************************ 00:05:23.912 15:28:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:24.169 15:28:19 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:24.169 ************************************ 00:05:24.169 END TEST setup.sh 00:05:24.169 ************************************ 00:05:24.169 00:05:24.169 real 0m20.669s 00:05:24.169 user 0m6.715s 00:05:24.169 sys 0m8.452s 00:05:24.169 15:28:19 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.169 15:28:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:24.169 15:28:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.169 15:28:19 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:24.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.734 Hugepages 00:05:24.734 node hugesize free / total 00:05:24.734 node0 1048576kB 0 / 0 00:05:24.734 node0 2048kB 2048 / 2048 00:05:24.734 00:05:24.734 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:24.734 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:24.991 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:24.991 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:24.991 15:28:19 -- spdk/autotest.sh@130 -- # uname -s 00:05:24.991 15:28:19 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:24.991 15:28:19 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:24.991 15:28:19 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.553 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.811 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.811 15:28:20 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:26.744 15:28:21 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:26.744 15:28:21 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:26.744 15:28:21 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:26.744 15:28:21 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:26.744 15:28:21 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:26.744 15:28:21 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:26.744 15:28:21 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.744 15:28:21 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:26.744 15:28:21 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:26.744 15:28:21 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:26.744 15:28:21 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:26.744 15:28:21 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.310 Waiting for block devices as requested 00:05:27.310 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:27.310 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:27.310 15:28:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:27.310 15:28:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:27.310 15:28:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:27.310 15:28:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:27.310 15:28:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:27.310 15:28:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:27.310 15:28:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:27.310 15:28:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:27.310 15:28:22 -- common/autotest_common.sh@1539 -- # 
nvme_ctrlr=/dev/nvme1 00:05:27.310 15:28:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:27.310 15:28:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:27.310 15:28:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:27.310 15:28:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:27.310 15:28:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:27.310 15:28:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:27.310 15:28:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:27.310 15:28:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:27.310 15:28:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:27.310 15:28:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:27.310 15:28:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:27.310 15:28:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:27.310 15:28:22 -- common/autotest_common.sh@1557 -- # continue 00:05:27.310 15:28:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:27.310 15:28:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:27.310 15:28:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:27.310 15:28:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:27.569 15:28:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:27.569 15:28:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:27.569 15:28:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:27.569 15:28:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:27.569 15:28:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:27.569 15:28:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:27.569 15:28:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:27.569 15:28:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:27.569 15:28:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:27.569 15:28:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:27.569 15:28:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:27.569 15:28:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:27.569 15:28:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:27.569 15:28:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:27.569 15:28:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:27.569 15:28:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:27.569 15:28:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:27.569 15:28:22 -- common/autotest_common.sh@1557 -- # continue 00:05:27.569 15:28:22 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:27.569 15:28:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.569 15:28:22 -- common/autotest_common.sh@10 -- # set +x 00:05:27.569 15:28:22 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:27.569 15:28:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.569 15:28:22 -- common/autotest_common.sh@10 -- # set +x 00:05:27.569 15:28:22 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:28.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:05:28.135 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.393 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.393 15:28:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:28.393 15:28:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.393 15:28:23 -- common/autotest_common.sh@10 -- # set +x 00:05:28.393 15:28:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:28.393 15:28:23 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:28.393 15:28:23 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:28.393 15:28:23 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:28.393 15:28:23 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:28.393 15:28:23 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:28.393 15:28:23 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:28.393 15:28:23 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:28.393 15:28:23 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.393 15:28:23 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:28.393 15:28:23 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:28.393 15:28:23 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:28.393 15:28:23 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:28.393 15:28:23 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:28.393 15:28:23 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:28.393 15:28:23 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:28.393 15:28:23 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:28.393 15:28:23 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:28.393 15:28:23 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:28.393 15:28:23 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:28.393 15:28:23 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:28.393 15:28:23 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:28.393 15:28:23 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:28.393 15:28:23 -- common/autotest_common.sh@1593 -- # return 0 00:05:28.393 15:28:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:28.393 15:28:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:28.393 15:28:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:28.393 15:28:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:28.393 15:28:23 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:28.393 15:28:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.393 15:28:23 -- common/autotest_common.sh@10 -- # set +x 00:05:28.393 15:28:23 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:28.393 15:28:23 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:28.393 15:28:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.393 15:28:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.393 15:28:23 -- common/autotest_common.sh@10 -- # set +x 00:05:28.393 ************************************ 00:05:28.393 START TEST env 00:05:28.393 ************************************ 00:05:28.393 15:28:23 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:28.651 * Looking for test storage... 
00:05:28.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:28.651 15:28:23 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:28.651 15:28:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.651 15:28:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.651 15:28:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.651 ************************************ 00:05:28.651 START TEST env_memory 00:05:28.651 ************************************ 00:05:28.651 15:28:23 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:28.651 00:05:28.651 00:05:28.651 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.651 http://cunit.sourceforge.net/ 00:05:28.651 00:05:28.651 00:05:28.651 Suite: memory 00:05:28.651 Test: alloc and free memory map ...[2024-07-15 15:28:23.631198] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:28.651 passed 00:05:28.651 Test: mem map translation ...[2024-07-15 15:28:23.662897] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:28.651 [2024-07-15 15:28:23.662954] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:28.651 [2024-07-15 15:28:23.663011] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:28.651 [2024-07-15 15:28:23.663022] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:28.651 passed 00:05:28.651 Test: mem map registration ...[2024-07-15 15:28:23.728177] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:28.651 [2024-07-15 15:28:23.728243] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:28.651 passed 00:05:28.909 Test: mem map adjacent registrations ...passed 00:05:28.909 00:05:28.909 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.909 suites 1 1 n/a 0 0 00:05:28.909 tests 4 4 4 0 0 00:05:28.909 asserts 152 152 152 0 n/a 00:05:28.909 00:05:28.909 Elapsed time = 0.218 seconds 00:05:28.909 00:05:28.909 real 0m0.238s 00:05:28.909 user 0m0.219s 00:05:28.909 sys 0m0.014s 00:05:28.909 15:28:23 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.909 ************************************ 00:05:28.909 END TEST env_memory 00:05:28.909 ************************************ 00:05:28.909 15:28:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:28.909 15:28:23 env -- common/autotest_common.sh@1142 -- # return 0 00:05:28.909 15:28:23 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:28.909 15:28:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.909 15:28:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.909 15:28:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.909 ************************************ 00:05:28.909 START TEST env_vtophys 
00:05:28.909 ************************************ 00:05:28.909 15:28:23 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:28.909 EAL: lib.eal log level changed from notice to debug 00:05:28.909 EAL: Detected lcore 0 as core 0 on socket 0 00:05:28.909 EAL: Detected lcore 1 as core 0 on socket 0 00:05:28.909 EAL: Detected lcore 2 as core 0 on socket 0 00:05:28.909 EAL: Detected lcore 3 as core 0 on socket 0 00:05:28.909 EAL: Detected lcore 4 as core 0 on socket 0 00:05:28.909 EAL: Detected lcore 5 as core 0 on socket 0 00:05:28.909 EAL: Detected lcore 6 as core 0 on socket 0 00:05:28.909 EAL: Detected lcore 7 as core 0 on socket 0 00:05:28.909 EAL: Detected lcore 8 as core 0 on socket 0 00:05:28.909 EAL: Detected lcore 9 as core 0 on socket 0 00:05:28.909 EAL: Maximum logical cores by configuration: 128 00:05:28.909 EAL: Detected CPU lcores: 10 00:05:28.909 EAL: Detected NUMA nodes: 1 00:05:28.909 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:28.909 EAL: Detected shared linkage of DPDK 00:05:28.909 EAL: No shared files mode enabled, IPC will be disabled 00:05:28.909 EAL: Selected IOVA mode 'PA' 00:05:28.909 EAL: Probing VFIO support... 00:05:28.909 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:28.909 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:28.909 EAL: Ask a virtual area of 0x2e000 bytes 00:05:28.909 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:28.909 EAL: Setting up physically contiguous memory... 00:05:28.909 EAL: Setting maximum number of open files to 524288 00:05:28.909 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:28.909 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:28.909 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.909 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:28.909 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.909 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.909 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:28.909 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:28.909 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.909 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:28.909 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.909 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.909 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:28.910 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:28.910 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.910 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:28.910 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.910 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.910 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:28.910 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:28.910 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.910 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:28.910 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.910 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.910 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:28.910 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:28.910 EAL: Hugepages will be freed exactly as allocated. 
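The EAL probe above reserves virtual address space for four 2 MiB-hugepage memseg lists and then relies on hugepages that were allocated before the test ran (node0 reports 2048 pages of 2048 kB earlier in this log). A minimal sketch of reserving those pages by hand, assuming the single NUMA node shown above; SPDK's scripts/setup.sh normally handles this as part of device binding:

# reserve 2048 pages of 2 MiB hugepages on node 0 and confirm the kernel counters
echo 2048 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
grep Huge /proc/meminfo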
00:05:28.910 EAL: No shared files mode enabled, IPC is disabled 00:05:28.910 EAL: No shared files mode enabled, IPC is disabled 00:05:28.910 EAL: TSC frequency is ~2200000 KHz 00:05:28.910 EAL: Main lcore 0 is ready (tid=7fe9cc260a00;cpuset=[0]) 00:05:28.910 EAL: Trying to obtain current memory policy. 00:05:28.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.910 EAL: Restoring previous memory policy: 0 00:05:28.910 EAL: request: mp_malloc_sync 00:05:28.910 EAL: No shared files mode enabled, IPC is disabled 00:05:28.910 EAL: Heap on socket 0 was expanded by 2MB 00:05:28.910 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:28.910 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:28.910 EAL: Mem event callback 'spdk:(nil)' registered 00:05:28.910 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:28.910 00:05:28.910 00:05:28.910 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.910 http://cunit.sourceforge.net/ 00:05:28.910 00:05:28.910 00:05:28.910 Suite: components_suite 00:05:28.910 Test: vtophys_malloc_test ...passed 00:05:28.910 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:28.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.910 EAL: Restoring previous memory policy: 4 00:05:28.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.910 EAL: request: mp_malloc_sync 00:05:28.910 EAL: No shared files mode enabled, IPC is disabled 00:05:28.910 EAL: Heap on socket 0 was expanded by 4MB 00:05:28.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.910 EAL: request: mp_malloc_sync 00:05:28.910 EAL: No shared files mode enabled, IPC is disabled 00:05:28.910 EAL: Heap on socket 0 was shrunk by 4MB 00:05:28.910 EAL: Trying to obtain current memory policy. 00:05:28.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.910 EAL: Restoring previous memory policy: 4 00:05:28.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.910 EAL: request: mp_malloc_sync 00:05:28.910 EAL: No shared files mode enabled, IPC is disabled 00:05:28.910 EAL: Heap on socket 0 was expanded by 6MB 00:05:28.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.910 EAL: request: mp_malloc_sync 00:05:28.910 EAL: No shared files mode enabled, IPC is disabled 00:05:28.910 EAL: Heap on socket 0 was shrunk by 6MB 00:05:28.910 EAL: Trying to obtain current memory policy. 00:05:28.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.169 EAL: Restoring previous memory policy: 4 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was expanded by 10MB 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was shrunk by 10MB 00:05:29.169 EAL: Trying to obtain current memory policy. 
00:05:29.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.169 EAL: Restoring previous memory policy: 4 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was expanded by 18MB 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was shrunk by 18MB 00:05:29.169 EAL: Trying to obtain current memory policy. 00:05:29.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.169 EAL: Restoring previous memory policy: 4 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was expanded by 34MB 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was shrunk by 34MB 00:05:29.169 EAL: Trying to obtain current memory policy. 00:05:29.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.169 EAL: Restoring previous memory policy: 4 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was expanded by 66MB 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was shrunk by 66MB 00:05:29.169 EAL: Trying to obtain current memory policy. 00:05:29.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.169 EAL: Restoring previous memory policy: 4 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was expanded by 130MB 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was shrunk by 130MB 00:05:29.169 EAL: Trying to obtain current memory policy. 00:05:29.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.169 EAL: Restoring previous memory policy: 4 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was expanded by 258MB 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was shrunk by 258MB 00:05:29.169 EAL: Trying to obtain current memory policy. 
00:05:29.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.169 EAL: Restoring previous memory policy: 4 00:05:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.169 EAL: request: mp_malloc_sync 00:05:29.169 EAL: No shared files mode enabled, IPC is disabled 00:05:29.169 EAL: Heap on socket 0 was expanded by 514MB 00:05:29.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.428 EAL: request: mp_malloc_sync 00:05:29.428 EAL: No shared files mode enabled, IPC is disabled 00:05:29.428 EAL: Heap on socket 0 was shrunk by 514MB 00:05:29.428 EAL: Trying to obtain current memory policy. 00:05:29.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.428 EAL: Restoring previous memory policy: 4 00:05:29.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.428 EAL: request: mp_malloc_sync 00:05:29.428 EAL: No shared files mode enabled, IPC is disabled 00:05:29.428 EAL: Heap on socket 0 was expanded by 1026MB 00:05:29.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.687 EAL: request: mp_malloc_sync 00:05:29.687 EAL: No shared files mode enabled, IPC is disabled 00:05:29.687 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:29.687 passed 00:05:29.687 00:05:29.687 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.687 suites 1 1 n/a 0 0 00:05:29.687 tests 2 2 2 0 0 00:05:29.687 asserts 5358 5358 5358 0 n/a 00:05:29.687 00:05:29.687 Elapsed time = 0.679 seconds 00:05:29.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.687 EAL: request: mp_malloc_sync 00:05:29.687 EAL: No shared files mode enabled, IPC is disabled 00:05:29.687 EAL: Heap on socket 0 was shrunk by 2MB 00:05:29.687 EAL: No shared files mode enabled, IPC is disabled 00:05:29.687 EAL: No shared files mode enabled, IPC is disabled 00:05:29.687 EAL: No shared files mode enabled, IPC is disabled 00:05:29.687 00:05:29.687 real 0m0.871s 00:05:29.687 user 0m0.440s 00:05:29.687 sys 0m0.300s 00:05:29.688 15:28:24 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.688 15:28:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:29.688 ************************************ 00:05:29.688 END TEST env_vtophys 00:05:29.688 ************************************ 00:05:29.688 15:28:24 env -- common/autotest_common.sh@1142 -- # return 0 00:05:29.688 15:28:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.688 15:28:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.688 15:28:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.688 15:28:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.688 ************************************ 00:05:29.688 START TEST env_pci 00:05:29.688 ************************************ 00:05:29.688 15:28:24 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.688 00:05:29.688 00:05:29.688 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.688 http://cunit.sourceforge.net/ 00:05:29.688 00:05:29.688 00:05:29.688 Suite: pci 00:05:29.688 Test: pci_hook ...[2024-07-15 15:28:24.802466] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60469 has claimed it 00:05:29.688 passed 00:05:29.688 00:05:29.688 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.688 suites 1 1 n/a 0 0 00:05:29.688 tests 1 1 1 0 0 00:05:29.688 asserts 25 25 25 0 n/a 00:05:29.688 
00:05:29.688 Elapsed time = 0.002 secondsEAL: Cannot find device (10000:00:01.0) 00:05:29.688 EAL: Failed to attach device on primary process 00:05:29.688 00:05:29.688 00:05:29.688 real 0m0.020s 00:05:29.688 user 0m0.007s 00:05:29.688 sys 0m0.013s 00:05:29.688 15:28:24 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.688 15:28:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:29.688 ************************************ 00:05:29.688 END TEST env_pci 00:05:29.688 ************************************ 00:05:29.946 15:28:24 env -- common/autotest_common.sh@1142 -- # return 0 00:05:29.946 15:28:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:29.946 15:28:24 env -- env/env.sh@15 -- # uname 00:05:29.946 15:28:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:29.946 15:28:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:29.946 15:28:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.946 15:28:24 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:29.946 15:28:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.946 15:28:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.946 ************************************ 00:05:29.946 START TEST env_dpdk_post_init 00:05:29.946 ************************************ 00:05:29.946 15:28:24 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.946 EAL: Detected CPU lcores: 10 00:05:29.946 EAL: Detected NUMA nodes: 1 00:05:29.946 EAL: Detected shared linkage of DPDK 00:05:29.946 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.946 EAL: Selected IOVA mode 'PA' 00:05:29.946 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.946 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:29.946 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:29.946 Starting DPDK initialization... 00:05:29.946 Starting SPDK post initialization... 00:05:29.946 SPDK NVMe probe 00:05:29.946 Attaching to 0000:00:10.0 00:05:29.946 Attaching to 0000:00:11.0 00:05:29.946 Attached to 0000:00:10.0 00:05:29.946 Attached to 0000:00:11.0 00:05:29.946 Cleaning up... 
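env_dpdk_post_init above attaches the spdk_nvme driver to 0000:00:10.0 and 0000:00:11.0, the same addresses the surrounding helpers pull out of gen_nvme.sh. That lookup can be reproduced on its own, using the paths already shown elsewhere in this log:

# list the NVMe PCI addresses the autotest helpers operate on
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
# expected on this VM: 0000:00:10.0 and 0000:00:11.0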
00:05:29.946 00:05:29.946 real 0m0.179s 00:05:29.946 user 0m0.050s 00:05:29.946 sys 0m0.029s 00:05:29.946 15:28:25 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.946 ************************************ 00:05:29.946 END TEST env_dpdk_post_init 00:05:29.946 15:28:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.946 ************************************ 00:05:30.204 15:28:25 env -- common/autotest_common.sh@1142 -- # return 0 00:05:30.204 15:28:25 env -- env/env.sh@26 -- # uname 00:05:30.204 15:28:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:30.204 15:28:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:30.204 15:28:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.204 15:28:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.204 15:28:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.204 ************************************ 00:05:30.204 START TEST env_mem_callbacks 00:05:30.204 ************************************ 00:05:30.204 15:28:25 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:30.204 EAL: Detected CPU lcores: 10 00:05:30.204 EAL: Detected NUMA nodes: 1 00:05:30.204 EAL: Detected shared linkage of DPDK 00:05:30.204 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:30.204 EAL: Selected IOVA mode 'PA' 00:05:30.204 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:30.204 00:05:30.204 00:05:30.204 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.204 http://cunit.sourceforge.net/ 00:05:30.204 00:05:30.204 00:05:30.204 Suite: memory 00:05:30.204 Test: test ... 
00:05:30.204 register 0x200000200000 2097152 00:05:30.204 malloc 3145728 00:05:30.204 register 0x200000400000 4194304 00:05:30.204 buf 0x200000500000 len 3145728 PASSED 00:05:30.204 malloc 64 00:05:30.204 buf 0x2000004fff40 len 64 PASSED 00:05:30.204 malloc 4194304 00:05:30.204 register 0x200000800000 6291456 00:05:30.204 buf 0x200000a00000 len 4194304 PASSED 00:05:30.204 free 0x200000500000 3145728 00:05:30.204 free 0x2000004fff40 64 00:05:30.204 unregister 0x200000400000 4194304 PASSED 00:05:30.204 free 0x200000a00000 4194304 00:05:30.204 unregister 0x200000800000 6291456 PASSED 00:05:30.204 malloc 8388608 00:05:30.204 register 0x200000400000 10485760 00:05:30.204 buf 0x200000600000 len 8388608 PASSED 00:05:30.204 free 0x200000600000 8388608 00:05:30.204 unregister 0x200000400000 10485760 PASSED 00:05:30.204 passed 00:05:30.204 00:05:30.204 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.204 suites 1 1 n/a 0 0 00:05:30.204 tests 1 1 1 0 0 00:05:30.204 asserts 15 15 15 0 n/a 00:05:30.204 00:05:30.204 Elapsed time = 0.007 seconds 00:05:30.204 00:05:30.204 real 0m0.145s 00:05:30.204 user 0m0.014s 00:05:30.204 sys 0m0.030s 00:05:30.204 15:28:25 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.204 15:28:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:30.204 ************************************ 00:05:30.204 END TEST env_mem_callbacks 00:05:30.204 ************************************ 00:05:30.204 15:28:25 env -- common/autotest_common.sh@1142 -- # return 0 00:05:30.204 00:05:30.204 real 0m1.784s 00:05:30.204 user 0m0.839s 00:05:30.204 sys 0m0.590s 00:05:30.204 15:28:25 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.204 15:28:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.204 ************************************ 00:05:30.204 END TEST env 00:05:30.204 ************************************ 00:05:30.204 15:28:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.204 15:28:25 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:30.204 15:28:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.204 15:28:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.204 15:28:25 -- common/autotest_common.sh@10 -- # set +x 00:05:30.204 ************************************ 00:05:30.204 START TEST rpc 00:05:30.204 ************************************ 00:05:30.204 15:28:25 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:30.463 * Looking for test storage... 00:05:30.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.463 15:28:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60584 00:05:30.463 15:28:25 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:30.463 15:28:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.463 15:28:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60584 00:05:30.463 15:28:25 rpc -- common/autotest_common.sh@829 -- # '[' -z 60584 ']' 00:05:30.463 15:28:25 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.463 15:28:25 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.463 15:28:25 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
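The rpc suite that starts here launches spdk_tgt with the bdev tracepoint group enabled and then drives it over /var/tmp/spdk.sock. A minimal sketch of the same round trip by hand, assuming the binary and rpc.py paths visible in this log and the default socket (the test script itself uses its waitforlisten helper rather than the polling loop below):

# start the target in the background and wait for the RPC socket to appear
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
# create an 8 MiB malloc bdev with 512-byte blocks, then list bdevs, as rpc.sh does
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs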
00:05:30.463 15:28:25 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.463 15:28:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.463 [2024-07-15 15:28:25.499191] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:30.463 [2024-07-15 15:28:25.499326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60584 ] 00:05:30.721 [2024-07-15 15:28:25.639269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.721 [2024-07-15 15:28:25.713600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:30.721 [2024-07-15 15:28:25.713681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60584' to capture a snapshot of events at runtime. 00:05:30.721 [2024-07-15 15:28:25.713696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:30.721 [2024-07-15 15:28:25.713706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:30.721 [2024-07-15 15:28:25.713715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60584 for offline analysis/debug. 00:05:30.721 [2024-07-15 15:28:25.713760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.656 15:28:26 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.656 15:28:26 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:31.656 15:28:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.656 15:28:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.656 15:28:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:31.656 15:28:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:31.656 15:28:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.656 15:28:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.656 15:28:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.656 ************************************ 00:05:31.656 START TEST rpc_integrity 00:05:31.656 ************************************ 00:05:31.656 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:31.656 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.656 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.656 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.656 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.656 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.656 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:31.656 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.656 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.656 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.656 15:28:26 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.656 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.656 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:31.656 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.656 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.656 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.656 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.656 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.656 { 00:05:31.656 "aliases": [ 00:05:31.656 "d61694b6-b882-4182-a636-23e4b2635d25" 00:05:31.656 ], 00:05:31.656 "assigned_rate_limits": { 00:05:31.656 "r_mbytes_per_sec": 0, 00:05:31.656 "rw_ios_per_sec": 0, 00:05:31.656 "rw_mbytes_per_sec": 0, 00:05:31.656 "w_mbytes_per_sec": 0 00:05:31.656 }, 00:05:31.656 "block_size": 512, 00:05:31.656 "claimed": false, 00:05:31.656 "driver_specific": {}, 00:05:31.656 "memory_domains": [ 00:05:31.656 { 00:05:31.656 "dma_device_id": "system", 00:05:31.656 "dma_device_type": 1 00:05:31.656 }, 00:05:31.656 { 00:05:31.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.656 "dma_device_type": 2 00:05:31.656 } 00:05:31.656 ], 00:05:31.656 "name": "Malloc0", 00:05:31.656 "num_blocks": 16384, 00:05:31.656 "product_name": "Malloc disk", 00:05:31.656 "supported_io_types": { 00:05:31.656 "abort": true, 00:05:31.656 "compare": false, 00:05:31.656 "compare_and_write": false, 00:05:31.656 "copy": true, 00:05:31.656 "flush": true, 00:05:31.656 "get_zone_info": false, 00:05:31.656 "nvme_admin": false, 00:05:31.656 "nvme_io": false, 00:05:31.656 "nvme_io_md": false, 00:05:31.656 "nvme_iov_md": false, 00:05:31.656 "read": true, 00:05:31.656 "reset": true, 00:05:31.656 "seek_data": false, 00:05:31.656 "seek_hole": false, 00:05:31.656 "unmap": true, 00:05:31.656 "write": true, 00:05:31.656 "write_zeroes": true, 00:05:31.656 "zcopy": true, 00:05:31.656 "zone_append": false, 00:05:31.657 "zone_management": false 00:05:31.657 }, 00:05:31.657 "uuid": "d61694b6-b882-4182-a636-23e4b2635d25", 00:05:31.657 "zoned": false 00:05:31.657 } 00:05:31.657 ]' 00:05:31.657 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:31.657 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.657 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.657 [2024-07-15 15:28:26.661576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:31.657 [2024-07-15 15:28:26.661635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.657 [2024-07-15 15:28:26.661657] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb81ad0 00:05:31.657 [2024-07-15 15:28:26.661666] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.657 [2024-07-15 15:28:26.663243] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.657 [2024-07-15 15:28:26.663280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.657 Passthru0 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.657 15:28:26 
rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.657 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.657 { 00:05:31.657 "aliases": [ 00:05:31.657 "d61694b6-b882-4182-a636-23e4b2635d25" 00:05:31.657 ], 00:05:31.657 "assigned_rate_limits": { 00:05:31.657 "r_mbytes_per_sec": 0, 00:05:31.657 "rw_ios_per_sec": 0, 00:05:31.657 "rw_mbytes_per_sec": 0, 00:05:31.657 "w_mbytes_per_sec": 0 00:05:31.657 }, 00:05:31.657 "block_size": 512, 00:05:31.657 "claim_type": "exclusive_write", 00:05:31.657 "claimed": true, 00:05:31.657 "driver_specific": {}, 00:05:31.657 "memory_domains": [ 00:05:31.657 { 00:05:31.657 "dma_device_id": "system", 00:05:31.657 "dma_device_type": 1 00:05:31.657 }, 00:05:31.657 { 00:05:31.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.657 "dma_device_type": 2 00:05:31.657 } 00:05:31.657 ], 00:05:31.657 "name": "Malloc0", 00:05:31.657 "num_blocks": 16384, 00:05:31.657 "product_name": "Malloc disk", 00:05:31.657 "supported_io_types": { 00:05:31.657 "abort": true, 00:05:31.657 "compare": false, 00:05:31.657 "compare_and_write": false, 00:05:31.657 "copy": true, 00:05:31.657 "flush": true, 00:05:31.657 "get_zone_info": false, 00:05:31.657 "nvme_admin": false, 00:05:31.657 "nvme_io": false, 00:05:31.657 "nvme_io_md": false, 00:05:31.657 "nvme_iov_md": false, 00:05:31.657 "read": true, 00:05:31.657 "reset": true, 00:05:31.657 "seek_data": false, 00:05:31.657 "seek_hole": false, 00:05:31.657 "unmap": true, 00:05:31.657 "write": true, 00:05:31.657 "write_zeroes": true, 00:05:31.657 "zcopy": true, 00:05:31.657 "zone_append": false, 00:05:31.657 "zone_management": false 00:05:31.657 }, 00:05:31.657 "uuid": "d61694b6-b882-4182-a636-23e4b2635d25", 00:05:31.657 "zoned": false 00:05:31.657 }, 00:05:31.657 { 00:05:31.657 "aliases": [ 00:05:31.657 "3b2e5a99-9e41-5a3f-97d3-fc0545585e65" 00:05:31.657 ], 00:05:31.657 "assigned_rate_limits": { 00:05:31.657 "r_mbytes_per_sec": 0, 00:05:31.657 "rw_ios_per_sec": 0, 00:05:31.657 "rw_mbytes_per_sec": 0, 00:05:31.657 "w_mbytes_per_sec": 0 00:05:31.657 }, 00:05:31.657 "block_size": 512, 00:05:31.657 "claimed": false, 00:05:31.657 "driver_specific": { 00:05:31.657 "passthru": { 00:05:31.657 "base_bdev_name": "Malloc0", 00:05:31.657 "name": "Passthru0" 00:05:31.657 } 00:05:31.657 }, 00:05:31.657 "memory_domains": [ 00:05:31.657 { 00:05:31.657 "dma_device_id": "system", 00:05:31.657 "dma_device_type": 1 00:05:31.657 }, 00:05:31.657 { 00:05:31.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.657 "dma_device_type": 2 00:05:31.657 } 00:05:31.657 ], 00:05:31.657 "name": "Passthru0", 00:05:31.657 "num_blocks": 16384, 00:05:31.657 "product_name": "passthru", 00:05:31.657 "supported_io_types": { 00:05:31.657 "abort": true, 00:05:31.657 "compare": false, 00:05:31.657 "compare_and_write": false, 00:05:31.657 "copy": true, 00:05:31.657 "flush": true, 00:05:31.657 "get_zone_info": false, 00:05:31.657 "nvme_admin": false, 00:05:31.657 "nvme_io": false, 00:05:31.657 "nvme_io_md": false, 00:05:31.657 "nvme_iov_md": false, 00:05:31.657 "read": true, 00:05:31.657 "reset": true, 00:05:31.657 "seek_data": false, 00:05:31.657 "seek_hole": false, 00:05:31.657 "unmap": true, 00:05:31.657 "write": true, 00:05:31.657 "write_zeroes": true, 00:05:31.657 "zcopy": true, 
00:05:31.657 "zone_append": false, 00:05:31.657 "zone_management": false 00:05:31.657 }, 00:05:31.657 "uuid": "3b2e5a99-9e41-5a3f-97d3-fc0545585e65", 00:05:31.657 "zoned": false 00:05:31.657 } 00:05:31.657 ]' 00:05:31.657 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:31.657 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.657 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.657 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.657 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.657 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.657 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.657 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.915 ************************************ 00:05:31.915 END TEST rpc_integrity 00:05:31.915 ************************************ 00:05:31.915 15:28:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.915 00:05:31.915 real 0m0.335s 00:05:31.915 user 0m0.225s 00:05:31.915 sys 0m0.040s 00:05:31.915 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.915 15:28:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.915 15:28:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:31.915 15:28:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:31.915 15:28:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.915 15:28:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.915 15:28:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.915 ************************************ 00:05:31.915 START TEST rpc_plugins 00:05:31.915 ************************************ 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:31.915 15:28:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.915 15:28:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:31.915 15:28:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.915 15:28:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:31.915 { 
00:05:31.915 "aliases": [ 00:05:31.915 "f7f47eb7-357e-4fa4-8994-23d81f487d76" 00:05:31.915 ], 00:05:31.915 "assigned_rate_limits": { 00:05:31.915 "r_mbytes_per_sec": 0, 00:05:31.915 "rw_ios_per_sec": 0, 00:05:31.915 "rw_mbytes_per_sec": 0, 00:05:31.915 "w_mbytes_per_sec": 0 00:05:31.915 }, 00:05:31.915 "block_size": 4096, 00:05:31.915 "claimed": false, 00:05:31.915 "driver_specific": {}, 00:05:31.915 "memory_domains": [ 00:05:31.915 { 00:05:31.915 "dma_device_id": "system", 00:05:31.915 "dma_device_type": 1 00:05:31.915 }, 00:05:31.915 { 00:05:31.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.915 "dma_device_type": 2 00:05:31.915 } 00:05:31.915 ], 00:05:31.915 "name": "Malloc1", 00:05:31.915 "num_blocks": 256, 00:05:31.915 "product_name": "Malloc disk", 00:05:31.915 "supported_io_types": { 00:05:31.915 "abort": true, 00:05:31.915 "compare": false, 00:05:31.915 "compare_and_write": false, 00:05:31.915 "copy": true, 00:05:31.915 "flush": true, 00:05:31.915 "get_zone_info": false, 00:05:31.915 "nvme_admin": false, 00:05:31.915 "nvme_io": false, 00:05:31.915 "nvme_io_md": false, 00:05:31.915 "nvme_iov_md": false, 00:05:31.915 "read": true, 00:05:31.915 "reset": true, 00:05:31.915 "seek_data": false, 00:05:31.915 "seek_hole": false, 00:05:31.915 "unmap": true, 00:05:31.915 "write": true, 00:05:31.915 "write_zeroes": true, 00:05:31.915 "zcopy": true, 00:05:31.915 "zone_append": false, 00:05:31.915 "zone_management": false 00:05:31.915 }, 00:05:31.915 "uuid": "f7f47eb7-357e-4fa4-8994-23d81f487d76", 00:05:31.915 "zoned": false 00:05:31.915 } 00:05:31.915 ]' 00:05:31.915 15:28:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:31.915 15:28:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:31.915 15:28:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.915 15:28:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.915 15:28:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.915 15:28:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:31.915 15:28:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:31.915 ************************************ 00:05:31.915 END TEST rpc_plugins 00:05:31.915 ************************************ 00:05:31.915 15:28:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:31.915 00:05:31.915 real 0m0.149s 00:05:31.915 user 0m0.105s 00:05:31.915 sys 0m0.008s 00:05:31.915 15:28:27 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.915 15:28:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.173 15:28:27 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:32.173 15:28:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:32.173 15:28:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.173 15:28:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.173 15:28:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.173 ************************************ 00:05:32.173 START TEST 
rpc_trace_cmd_test 00:05:32.173 ************************************ 00:05:32.173 15:28:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:32.173 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:32.173 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:32.173 15:28:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.173 15:28:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.173 15:28:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.173 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:32.173 "bdev": { 00:05:32.173 "mask": "0x8", 00:05:32.173 "tpoint_mask": "0xffffffffffffffff" 00:05:32.173 }, 00:05:32.173 "bdev_nvme": { 00:05:32.173 "mask": "0x4000", 00:05:32.173 "tpoint_mask": "0x0" 00:05:32.173 }, 00:05:32.173 "blobfs": { 00:05:32.173 "mask": "0x80", 00:05:32.173 "tpoint_mask": "0x0" 00:05:32.173 }, 00:05:32.173 "dsa": { 00:05:32.173 "mask": "0x200", 00:05:32.173 "tpoint_mask": "0x0" 00:05:32.173 }, 00:05:32.173 "ftl": { 00:05:32.173 "mask": "0x40", 00:05:32.173 "tpoint_mask": "0x0" 00:05:32.173 }, 00:05:32.173 "iaa": { 00:05:32.173 "mask": "0x1000", 00:05:32.173 "tpoint_mask": "0x0" 00:05:32.173 }, 00:05:32.173 "iscsi_conn": { 00:05:32.173 "mask": "0x2", 00:05:32.173 "tpoint_mask": "0x0" 00:05:32.173 }, 00:05:32.173 "nvme_pcie": { 00:05:32.173 "mask": "0x800", 00:05:32.173 "tpoint_mask": "0x0" 00:05:32.173 }, 00:05:32.173 "nvme_tcp": { 00:05:32.173 "mask": "0x2000", 00:05:32.173 "tpoint_mask": "0x0" 00:05:32.173 }, 00:05:32.173 "nvmf_rdma": { 00:05:32.173 "mask": "0x10", 00:05:32.173 "tpoint_mask": "0x0" 00:05:32.173 }, 00:05:32.174 "nvmf_tcp": { 00:05:32.174 "mask": "0x20", 00:05:32.174 "tpoint_mask": "0x0" 00:05:32.174 }, 00:05:32.174 "scsi": { 00:05:32.174 "mask": "0x4", 00:05:32.174 "tpoint_mask": "0x0" 00:05:32.174 }, 00:05:32.174 "sock": { 00:05:32.174 "mask": "0x8000", 00:05:32.174 "tpoint_mask": "0x0" 00:05:32.174 }, 00:05:32.174 "thread": { 00:05:32.174 "mask": "0x400", 00:05:32.174 "tpoint_mask": "0x0" 00:05:32.174 }, 00:05:32.174 "tpoint_group_mask": "0x8", 00:05:32.174 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60584" 00:05:32.174 }' 00:05:32.174 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:32.174 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:32.174 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:32.174 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:32.174 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:32.174 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:32.174 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:32.174 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:32.174 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:32.432 ************************************ 00:05:32.432 END TEST rpc_trace_cmd_test 00:05:32.432 ************************************ 00:05:32.432 15:28:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:32.432 00:05:32.432 real 0m0.262s 00:05:32.432 user 0m0.238s 00:05:32.432 sys 0m0.013s 00:05:32.432 15:28:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.432 15:28:27 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.432 15:28:27 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:32.432 15:28:27 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:32.432 15:28:27 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:32.432 15:28:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.432 15:28:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.432 15:28:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.432 ************************************ 00:05:32.432 START TEST go_rpc 00:05:32.432 ************************************ 00:05:32.432 15:28:27 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:05:32.432 15:28:27 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:32.432 15:28:27 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:32.432 15:28:27 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:32.432 15:28:27 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:32.432 15:28:27 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.432 15:28:27 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.432 15:28:27 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.432 15:28:27 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.432 15:28:27 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:32.432 15:28:27 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:32.433 15:28:27 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["c86f734f-e474-405b-9b45-91499897f85e"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"c86f734f-e474-405b-9b45-91499897f85e","zoned":false}]' 00:05:32.433 15:28:27 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:32.433 15:28:27 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:32.433 15:28:27 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:32.433 15:28:27 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.433 15:28:27 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.433 15:28:27 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.433 15:28:27 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:32.433 15:28:27 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:32.433 15:28:27 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:32.692 15:28:27 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:32.692 00:05:32.692 real 0m0.205s 00:05:32.692 user 0m0.145s 00:05:32.692 sys 0m0.028s 00:05:32.692 15:28:27 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.692 15:28:27 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.692 ************************************ 00:05:32.692 END TEST 
go_rpc 00:05:32.692 ************************************ 00:05:32.692 15:28:27 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:32.692 15:28:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:32.692 15:28:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:32.692 15:28:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.692 15:28:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.692 15:28:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.692 ************************************ 00:05:32.692 START TEST rpc_daemon_integrity 00:05:32.692 ************************************ 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.692 { 00:05:32.692 "aliases": [ 00:05:32.692 "7cb6c68f-aa9b-4741-b0fc-e1cb6cdba7a0" 00:05:32.692 ], 00:05:32.692 "assigned_rate_limits": { 00:05:32.692 "r_mbytes_per_sec": 0, 00:05:32.692 "rw_ios_per_sec": 0, 00:05:32.692 "rw_mbytes_per_sec": 0, 00:05:32.692 "w_mbytes_per_sec": 0 00:05:32.692 }, 00:05:32.692 "block_size": 512, 00:05:32.692 "claimed": false, 00:05:32.692 "driver_specific": {}, 00:05:32.692 "memory_domains": [ 00:05:32.692 { 00:05:32.692 "dma_device_id": "system", 00:05:32.692 "dma_device_type": 1 00:05:32.692 }, 00:05:32.692 { 00:05:32.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.692 "dma_device_type": 2 00:05:32.692 } 00:05:32.692 ], 00:05:32.692 "name": "Malloc3", 00:05:32.692 "num_blocks": 16384, 00:05:32.692 "product_name": "Malloc disk", 00:05:32.692 "supported_io_types": { 00:05:32.692 "abort": true, 00:05:32.692 "compare": false, 00:05:32.692 "compare_and_write": false, 00:05:32.692 "copy": true, 00:05:32.692 "flush": true, 00:05:32.692 "get_zone_info": false, 00:05:32.692 "nvme_admin": false, 00:05:32.692 "nvme_io": false, 00:05:32.692 "nvme_io_md": false, 00:05:32.692 "nvme_iov_md": false, 00:05:32.692 "read": true, 00:05:32.692 "reset": true, 00:05:32.692 "seek_data": 
false, 00:05:32.692 "seek_hole": false, 00:05:32.692 "unmap": true, 00:05:32.692 "write": true, 00:05:32.692 "write_zeroes": true, 00:05:32.692 "zcopy": true, 00:05:32.692 "zone_append": false, 00:05:32.692 "zone_management": false 00:05:32.692 }, 00:05:32.692 "uuid": "7cb6c68f-aa9b-4741-b0fc-e1cb6cdba7a0", 00:05:32.692 "zoned": false 00:05:32.692 } 00:05:32.692 ]' 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.692 [2024-07-15 15:28:27.757937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:32.692 [2024-07-15 15:28:27.757994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.692 [2024-07-15 15:28:27.758016] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd78d70 00:05:32.692 [2024-07-15 15:28:27.758026] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.692 [2024-07-15 15:28:27.759538] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:32.692 [2024-07-15 15:28:27.759579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:32.692 Passthru0 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.692 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:32.692 { 00:05:32.692 "aliases": [ 00:05:32.692 "7cb6c68f-aa9b-4741-b0fc-e1cb6cdba7a0" 00:05:32.692 ], 00:05:32.692 "assigned_rate_limits": { 00:05:32.692 "r_mbytes_per_sec": 0, 00:05:32.692 "rw_ios_per_sec": 0, 00:05:32.692 "rw_mbytes_per_sec": 0, 00:05:32.692 "w_mbytes_per_sec": 0 00:05:32.692 }, 00:05:32.692 "block_size": 512, 00:05:32.692 "claim_type": "exclusive_write", 00:05:32.692 "claimed": true, 00:05:32.692 "driver_specific": {}, 00:05:32.692 "memory_domains": [ 00:05:32.692 { 00:05:32.692 "dma_device_id": "system", 00:05:32.692 "dma_device_type": 1 00:05:32.692 }, 00:05:32.692 { 00:05:32.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.692 "dma_device_type": 2 00:05:32.692 } 00:05:32.692 ], 00:05:32.692 "name": "Malloc3", 00:05:32.692 "num_blocks": 16384, 00:05:32.692 "product_name": "Malloc disk", 00:05:32.692 "supported_io_types": { 00:05:32.692 "abort": true, 00:05:32.692 "compare": false, 00:05:32.692 "compare_and_write": false, 00:05:32.692 "copy": true, 00:05:32.692 "flush": true, 00:05:32.692 "get_zone_info": false, 00:05:32.693 "nvme_admin": false, 00:05:32.693 "nvme_io": false, 00:05:32.693 "nvme_io_md": false, 00:05:32.693 "nvme_iov_md": false, 00:05:32.693 "read": true, 00:05:32.693 "reset": true, 00:05:32.693 "seek_data": false, 00:05:32.693 "seek_hole": false, 00:05:32.693 "unmap": true, 00:05:32.693 "write": true, 00:05:32.693 "write_zeroes": 
true, 00:05:32.693 "zcopy": true, 00:05:32.693 "zone_append": false, 00:05:32.693 "zone_management": false 00:05:32.693 }, 00:05:32.693 "uuid": "7cb6c68f-aa9b-4741-b0fc-e1cb6cdba7a0", 00:05:32.693 "zoned": false 00:05:32.693 }, 00:05:32.693 { 00:05:32.693 "aliases": [ 00:05:32.693 "5f5458a2-ee21-573c-9610-1fbc3fdb6c87" 00:05:32.693 ], 00:05:32.693 "assigned_rate_limits": { 00:05:32.693 "r_mbytes_per_sec": 0, 00:05:32.693 "rw_ios_per_sec": 0, 00:05:32.693 "rw_mbytes_per_sec": 0, 00:05:32.693 "w_mbytes_per_sec": 0 00:05:32.693 }, 00:05:32.693 "block_size": 512, 00:05:32.693 "claimed": false, 00:05:32.693 "driver_specific": { 00:05:32.693 "passthru": { 00:05:32.693 "base_bdev_name": "Malloc3", 00:05:32.693 "name": "Passthru0" 00:05:32.693 } 00:05:32.693 }, 00:05:32.693 "memory_domains": [ 00:05:32.693 { 00:05:32.693 "dma_device_id": "system", 00:05:32.693 "dma_device_type": 1 00:05:32.693 }, 00:05:32.693 { 00:05:32.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.693 "dma_device_type": 2 00:05:32.693 } 00:05:32.693 ], 00:05:32.693 "name": "Passthru0", 00:05:32.693 "num_blocks": 16384, 00:05:32.693 "product_name": "passthru", 00:05:32.693 "supported_io_types": { 00:05:32.693 "abort": true, 00:05:32.693 "compare": false, 00:05:32.693 "compare_and_write": false, 00:05:32.693 "copy": true, 00:05:32.693 "flush": true, 00:05:32.693 "get_zone_info": false, 00:05:32.693 "nvme_admin": false, 00:05:32.693 "nvme_io": false, 00:05:32.693 "nvme_io_md": false, 00:05:32.693 "nvme_iov_md": false, 00:05:32.693 "read": true, 00:05:32.693 "reset": true, 00:05:32.693 "seek_data": false, 00:05:32.693 "seek_hole": false, 00:05:32.693 "unmap": true, 00:05:32.693 "write": true, 00:05:32.693 "write_zeroes": true, 00:05:32.693 "zcopy": true, 00:05:32.693 "zone_append": false, 00:05:32.693 "zone_management": false 00:05:32.693 }, 00:05:32.693 "uuid": "5f5458a2-ee21-573c-9610-1fbc3fdb6c87", 00:05:32.693 "zoned": false 00:05:32.693 } 00:05:32.693 ]' 00:05:32.693 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:32.952 ************************************ 00:05:32.952 END TEST rpc_daemon_integrity 
00:05:32.952 ************************************ 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:32.952 00:05:32.952 real 0m0.316s 00:05:32.952 user 0m0.211s 00:05:32.952 sys 0m0.035s 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.952 15:28:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.952 15:28:27 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:32.952 15:28:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:32.952 15:28:27 rpc -- rpc/rpc.sh@84 -- # killprocess 60584 00:05:32.952 15:28:27 rpc -- common/autotest_common.sh@948 -- # '[' -z 60584 ']' 00:05:32.952 15:28:27 rpc -- common/autotest_common.sh@952 -- # kill -0 60584 00:05:32.952 15:28:27 rpc -- common/autotest_common.sh@953 -- # uname 00:05:32.952 15:28:27 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.952 15:28:27 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60584 00:05:32.952 killing process with pid 60584 00:05:32.952 15:28:27 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.952 15:28:27 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.952 15:28:27 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60584' 00:05:32.952 15:28:27 rpc -- common/autotest_common.sh@967 -- # kill 60584 00:05:32.952 15:28:27 rpc -- common/autotest_common.sh@972 -- # wait 60584 00:05:33.234 00:05:33.234 real 0m2.912s 00:05:33.234 user 0m4.009s 00:05:33.234 sys 0m0.606s 00:05:33.234 15:28:28 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.234 ************************************ 00:05:33.234 END TEST rpc 00:05:33.234 ************************************ 00:05:33.234 15:28:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.234 15:28:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:33.234 15:28:28 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:33.234 15:28:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.234 15:28:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.234 15:28:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.234 ************************************ 00:05:33.234 START TEST skip_rpc 00:05:33.234 ************************************ 00:05:33.234 15:28:28 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:33.234 * Looking for test storage... 
00:05:33.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:33.234 15:28:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.234 15:28:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:33.234 15:28:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:33.234 15:28:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.234 15:28:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.234 15:28:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.234 ************************************ 00:05:33.234 START TEST skip_rpc 00:05:33.234 ************************************ 00:05:33.234 15:28:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:33.234 15:28:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60845 00:05:33.234 15:28:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:33.234 15:28:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.234 15:28:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:33.492 [2024-07-15 15:28:28.418227] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:33.493 [2024-07-15 15:28:28.418317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60845 ] 00:05:33.493 [2024-07-15 15:28:28.552941] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.493 [2024-07-15 15:28:28.616853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.759 2024/07/15 15:28:33 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:38.759 15:28:33 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60845 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60845 ']' 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60845 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60845 00:05:38.759 killing process with pid 60845 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60845' 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60845 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60845 00:05:38.759 ************************************ 00:05:38.759 END TEST skip_rpc 00:05:38.759 ************************************ 00:05:38.759 00:05:38.759 real 0m5.299s 00:05:38.759 user 0m5.034s 00:05:38.759 sys 0m0.168s 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.759 15:28:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.759 15:28:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.759 15:28:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:38.759 15:28:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.759 15:28:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.759 15:28:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.759 ************************************ 00:05:38.759 START TEST skip_rpc_with_json 00:05:38.759 ************************************ 00:05:38.759 15:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:38.759 15:28:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:38.759 15:28:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60932 00:05:38.759 15:28:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.759 15:28:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60932 00:05:38.759 15:28:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.760 15:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 60932 ']' 00:05:38.760 15:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.760 15:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.760 15:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:38.760 15:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.760 15:28:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.760 [2024-07-15 15:28:33.739857] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:38.760 [2024-07-15 15:28:33.739958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60932 ] 00:05:38.760 [2024-07-15 15:28:33.877077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.018 [2024-07-15 15:28:33.947148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.018 [2024-07-15 15:28:34.120570] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:39.018 2024/07/15 15:28:34 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:39.018 request: 00:05:39.018 { 00:05:39.018 "method": "nvmf_get_transports", 00:05:39.018 "params": { 00:05:39.018 "trtype": "tcp" 00:05:39.018 } 00:05:39.018 } 00:05:39.018 Got JSON-RPC error response 00:05:39.018 GoRPCClient: error on JSON-RPC call 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.018 [2024-07-15 15:28:34.132696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.018 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.277 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.277 15:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:39.277 { 00:05:39.277 "subsystems": [ 00:05:39.277 { 00:05:39.277 "subsystem": "keyring", 00:05:39.277 "config": [] 00:05:39.277 }, 00:05:39.277 { 00:05:39.277 "subsystem": "iobuf", 00:05:39.277 "config": [ 00:05:39.277 { 00:05:39.277 "method": "iobuf_set_options", 00:05:39.277 "params": { 00:05:39.277 "large_bufsize": 135168, 00:05:39.277 "large_pool_count": 1024, 00:05:39.277 "small_bufsize": 8192, 00:05:39.277 "small_pool_count": 8192 00:05:39.277 } 00:05:39.277 } 
00:05:39.277 ] 00:05:39.277 }, 00:05:39.277 { 00:05:39.277 "subsystem": "sock", 00:05:39.277 "config": [ 00:05:39.277 { 00:05:39.277 "method": "sock_set_default_impl", 00:05:39.277 "params": { 00:05:39.277 "impl_name": "posix" 00:05:39.277 } 00:05:39.277 }, 00:05:39.277 { 00:05:39.277 "method": "sock_impl_set_options", 00:05:39.277 "params": { 00:05:39.277 "enable_ktls": false, 00:05:39.277 "enable_placement_id": 0, 00:05:39.277 "enable_quickack": false, 00:05:39.277 "enable_recv_pipe": true, 00:05:39.277 "enable_zerocopy_send_client": false, 00:05:39.277 "enable_zerocopy_send_server": true, 00:05:39.277 "impl_name": "ssl", 00:05:39.277 "recv_buf_size": 4096, 00:05:39.277 "send_buf_size": 4096, 00:05:39.277 "tls_version": 0, 00:05:39.277 "zerocopy_threshold": 0 00:05:39.277 } 00:05:39.277 }, 00:05:39.277 { 00:05:39.277 "method": "sock_impl_set_options", 00:05:39.277 "params": { 00:05:39.277 "enable_ktls": false, 00:05:39.277 "enable_placement_id": 0, 00:05:39.277 "enable_quickack": false, 00:05:39.277 "enable_recv_pipe": true, 00:05:39.277 "enable_zerocopy_send_client": false, 00:05:39.277 "enable_zerocopy_send_server": true, 00:05:39.277 "impl_name": "posix", 00:05:39.277 "recv_buf_size": 2097152, 00:05:39.277 "send_buf_size": 2097152, 00:05:39.277 "tls_version": 0, 00:05:39.277 "zerocopy_threshold": 0 00:05:39.277 } 00:05:39.277 } 00:05:39.277 ] 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "subsystem": "vmd", 00:05:39.278 "config": [] 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "subsystem": "accel", 00:05:39.278 "config": [ 00:05:39.278 { 00:05:39.278 "method": "accel_set_options", 00:05:39.278 "params": { 00:05:39.278 "buf_count": 2048, 00:05:39.278 "large_cache_size": 16, 00:05:39.278 "sequence_count": 2048, 00:05:39.278 "small_cache_size": 128, 00:05:39.278 "task_count": 2048 00:05:39.278 } 00:05:39.278 } 00:05:39.278 ] 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "subsystem": "bdev", 00:05:39.278 "config": [ 00:05:39.278 { 00:05:39.278 "method": "bdev_set_options", 00:05:39.278 "params": { 00:05:39.278 "bdev_auto_examine": true, 00:05:39.278 "bdev_io_cache_size": 256, 00:05:39.278 "bdev_io_pool_size": 65535, 00:05:39.278 "iobuf_large_cache_size": 16, 00:05:39.278 "iobuf_small_cache_size": 128 00:05:39.278 } 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "method": "bdev_raid_set_options", 00:05:39.278 "params": { 00:05:39.278 "process_window_size_kb": 1024 00:05:39.278 } 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "method": "bdev_iscsi_set_options", 00:05:39.278 "params": { 00:05:39.278 "timeout_sec": 30 00:05:39.278 } 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "method": "bdev_nvme_set_options", 00:05:39.278 "params": { 00:05:39.278 "action_on_timeout": "none", 00:05:39.278 "allow_accel_sequence": false, 00:05:39.278 "arbitration_burst": 0, 00:05:39.278 "bdev_retry_count": 3, 00:05:39.278 "ctrlr_loss_timeout_sec": 0, 00:05:39.278 "delay_cmd_submit": true, 00:05:39.278 "dhchap_dhgroups": [ 00:05:39.278 "null", 00:05:39.278 "ffdhe2048", 00:05:39.278 "ffdhe3072", 00:05:39.278 "ffdhe4096", 00:05:39.278 "ffdhe6144", 00:05:39.278 "ffdhe8192" 00:05:39.278 ], 00:05:39.278 "dhchap_digests": [ 00:05:39.278 "sha256", 00:05:39.278 "sha384", 00:05:39.278 "sha512" 00:05:39.278 ], 00:05:39.278 "disable_auto_failback": false, 00:05:39.278 "fast_io_fail_timeout_sec": 0, 00:05:39.278 "generate_uuids": false, 00:05:39.278 "high_priority_weight": 0, 00:05:39.278 "io_path_stat": false, 00:05:39.278 "io_queue_requests": 0, 00:05:39.278 "keep_alive_timeout_ms": 10000, 00:05:39.278 "low_priority_weight": 0, 
00:05:39.278 "medium_priority_weight": 0, 00:05:39.278 "nvme_adminq_poll_period_us": 10000, 00:05:39.278 "nvme_error_stat": false, 00:05:39.278 "nvme_ioq_poll_period_us": 0, 00:05:39.278 "rdma_cm_event_timeout_ms": 0, 00:05:39.278 "rdma_max_cq_size": 0, 00:05:39.278 "rdma_srq_size": 0, 00:05:39.278 "reconnect_delay_sec": 0, 00:05:39.278 "timeout_admin_us": 0, 00:05:39.278 "timeout_us": 0, 00:05:39.278 "transport_ack_timeout": 0, 00:05:39.278 "transport_retry_count": 4, 00:05:39.278 "transport_tos": 0 00:05:39.278 } 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "method": "bdev_nvme_set_hotplug", 00:05:39.278 "params": { 00:05:39.278 "enable": false, 00:05:39.278 "period_us": 100000 00:05:39.278 } 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "method": "bdev_wait_for_examine" 00:05:39.278 } 00:05:39.278 ] 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "subsystem": "scsi", 00:05:39.278 "config": null 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "subsystem": "scheduler", 00:05:39.278 "config": [ 00:05:39.278 { 00:05:39.278 "method": "framework_set_scheduler", 00:05:39.278 "params": { 00:05:39.278 "name": "static" 00:05:39.278 } 00:05:39.278 } 00:05:39.278 ] 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "subsystem": "vhost_scsi", 00:05:39.278 "config": [] 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "subsystem": "vhost_blk", 00:05:39.278 "config": [] 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "subsystem": "ublk", 00:05:39.278 "config": [] 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "subsystem": "nbd", 00:05:39.278 "config": [] 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "subsystem": "nvmf", 00:05:39.278 "config": [ 00:05:39.278 { 00:05:39.278 "method": "nvmf_set_config", 00:05:39.278 "params": { 00:05:39.278 "admin_cmd_passthru": { 00:05:39.278 "identify_ctrlr": false 00:05:39.278 }, 00:05:39.278 "discovery_filter": "match_any" 00:05:39.278 } 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "method": "nvmf_set_max_subsystems", 00:05:39.278 "params": { 00:05:39.278 "max_subsystems": 1024 00:05:39.278 } 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "method": "nvmf_set_crdt", 00:05:39.278 "params": { 00:05:39.278 "crdt1": 0, 00:05:39.278 "crdt2": 0, 00:05:39.278 "crdt3": 0 00:05:39.278 } 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "method": "nvmf_create_transport", 00:05:39.278 "params": { 00:05:39.278 "abort_timeout_sec": 1, 00:05:39.278 "ack_timeout": 0, 00:05:39.278 "buf_cache_size": 4294967295, 00:05:39.278 "c2h_success": true, 00:05:39.278 "data_wr_pool_size": 0, 00:05:39.278 "dif_insert_or_strip": false, 00:05:39.278 "in_capsule_data_size": 4096, 00:05:39.278 "io_unit_size": 131072, 00:05:39.278 "max_aq_depth": 128, 00:05:39.278 "max_io_qpairs_per_ctrlr": 127, 00:05:39.278 "max_io_size": 131072, 00:05:39.278 "max_queue_depth": 128, 00:05:39.278 "num_shared_buffers": 511, 00:05:39.278 "sock_priority": 0, 00:05:39.278 "trtype": "TCP", 00:05:39.278 "zcopy": false 00:05:39.278 } 00:05:39.278 } 00:05:39.278 ] 00:05:39.278 }, 00:05:39.278 { 00:05:39.278 "subsystem": "iscsi", 00:05:39.278 "config": [ 00:05:39.278 { 00:05:39.278 "method": "iscsi_set_options", 00:05:39.278 "params": { 00:05:39.278 "allow_duplicated_isid": false, 00:05:39.278 "chap_group": 0, 00:05:39.278 "data_out_pool_size": 2048, 00:05:39.278 "default_time2retain": 20, 00:05:39.278 "default_time2wait": 2, 00:05:39.278 "disable_chap": false, 00:05:39.278 "error_recovery_level": 0, 00:05:39.278 "first_burst_length": 8192, 00:05:39.278 "immediate_data": true, 00:05:39.278 "immediate_data_pool_size": 16384, 00:05:39.278 "max_connections_per_session": 
2, 00:05:39.278 "max_large_datain_per_connection": 64, 00:05:39.278 "max_queue_depth": 64, 00:05:39.278 "max_r2t_per_connection": 4, 00:05:39.278 "max_sessions": 128, 00:05:39.278 "mutual_chap": false, 00:05:39.278 "node_base": "iqn.2016-06.io.spdk", 00:05:39.278 "nop_in_interval": 30, 00:05:39.278 "nop_timeout": 60, 00:05:39.278 "pdu_pool_size": 36864, 00:05:39.278 "require_chap": false 00:05:39.278 } 00:05:39.278 } 00:05:39.278 ] 00:05:39.278 } 00:05:39.278 ] 00:05:39.278 } 00:05:39.278 15:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:39.278 15:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60932 00:05:39.278 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 60932 ']' 00:05:39.278 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 60932 00:05:39.278 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:39.278 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.278 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60932 00:05:39.278 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.278 killing process with pid 60932 00:05:39.278 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.278 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60932' 00:05:39.278 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 60932 00:05:39.278 15:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 60932 00:05:39.537 15:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60958 00:05:39.537 15:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:39.537 15:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 60958 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 60958 ']' 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 60958 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60958 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.803 killing process with pid 60958 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60958' 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 60958 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 60958 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:44.803 00:05:44.803 real 0m6.182s 00:05:44.803 user 0m5.917s 00:05:44.803 sys 0m0.439s 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.803 ************************************ 00:05:44.803 END TEST skip_rpc_with_json 00:05:44.803 ************************************ 00:05:44.803 15:28:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:44.803 15:28:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:44.803 15:28:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.803 15:28:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.803 15:28:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.803 ************************************ 00:05:44.803 START TEST skip_rpc_with_delay 00:05:44.803 ************************************ 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.803 15:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.804 15:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.804 15:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.804 15:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.804 15:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:44.804 15:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.062 [2024-07-15 15:28:39.992846] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
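The JSON dump above is the configuration that skip_rpc_with_json captured from its first target (pid 60932) with save_config; the grep/rm pair just above replays and verifies it. Stripped of the harness helpers, that replay step is roughly the following (run from the SPDK repo root; the redirect of the target's output into log.txt is an assumption, only the grep on it appears in the trace):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 \
        --json test/rpc/config.json > test/rpc/log.txt 2>&1 &   # replay the saved config, no RPC server
    tgt=$!
    sleep 5                                        # give the app time to finish init
    kill "$tgt"
    grep -q 'TCP Transport Init' test/rpc/log.txt  # proves the saved nvmf/TCP transport came up
    rm test/rpc/log.txt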
00:05:45.062 [2024-07-15 15:28:39.992997] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:45.062 15:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:45.062 15:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.062 15:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:45.062 15:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.062 00:05:45.062 real 0m0.104s 00:05:45.062 user 0m0.071s 00:05:45.062 sys 0m0.031s 00:05:45.062 15:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.062 15:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:45.062 ************************************ 00:05:45.062 END TEST skip_rpc_with_delay 00:05:45.062 ************************************ 00:05:45.062 15:28:40 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.062 15:28:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:45.062 15:28:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:45.062 15:28:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:45.062 15:28:40 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.062 15:28:40 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.062 15:28:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.062 ************************************ 00:05:45.062 START TEST exit_on_failed_rpc_init 00:05:45.062 ************************************ 00:05:45.062 15:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:45.062 15:28:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61062 00:05:45.062 15:28:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61062 00:05:45.062 15:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61062 ']' 00:05:45.062 15:28:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.062 15:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.062 15:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.062 15:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.062 15:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.062 15:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.062 [2024-07-15 15:28:40.143660] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
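skip_rpc_with_delay only asserts that the invocation above fails: the NOT helper runs the command and inverts its exit status, and the expected complaint is the app.c:831 error about '--wait-for-rpc'. A bare-bones equivalent without the helper:

    # Expected-failure check: --wait-for-rpc is meaningless without an RPC server.
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'spdk_tgt unexpectedly accepted --wait-for-rpc' >&2
        exit 1
    fi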
00:05:45.062 [2024-07-15 15:28:40.143766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61062 ] 00:05:45.320 [2024-07-15 15:28:40.284939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.320 [2024-07-15 15:28:40.357844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:46.256 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.256 [2024-07-15 15:28:41.271609] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:46.256 [2024-07-15 15:28:41.271742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61092 ] 00:05:46.515 [2024-07-15 15:28:41.407425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.515 [2024-07-15 15:28:41.480586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.515 [2024-07-15 15:28:41.480734] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
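exit_on_failed_rpc_init boils down to two targets racing for the same default RPC socket: the first (pid 61062) claims /var/tmp/spdk.sock, the second (pid 61092) hits the 'in use' errors logged around this point and must exit non-zero. A rough sketch (the sleep stands in for the harness's waitforlisten helper):

    build/bin/spdk_tgt -m 0x1 &            # first target claims /var/tmp/spdk.sock
    first=$!
    sleep 2                                # stand-in for waitforlisten on the socket
    if build/bin/spdk_tgt -m 0x2; then     # same socket path -> RPC listen fails, app exits
        echo 'second target should have failed RPC init' >&2
        exit 1
    fi
    kill "$first"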
00:05:46.515 [2024-07-15 15:28:41.480761] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:46.515 [2024-07-15 15:28:41.480777] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61062 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61062 ']' 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61062 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61062 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.515 killing process with pid 61062 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61062' 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61062 00:05:46.515 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61062 00:05:46.774 00:05:46.774 real 0m1.789s 00:05:46.774 user 0m2.263s 00:05:46.774 sys 0m0.329s 00:05:46.774 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.774 15:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.774 ************************************ 00:05:46.774 END TEST exit_on_failed_rpc_init 00:05:46.774 ************************************ 00:05:46.774 15:28:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.774 15:28:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:46.774 00:05:46.774 real 0m13.633s 00:05:46.774 user 0m13.375s 00:05:46.774 sys 0m1.128s 00:05:46.774 15:28:41 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.774 15:28:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.774 ************************************ 00:05:46.774 END TEST skip_rpc 00:05:46.774 ************************************ 00:05:47.033 15:28:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.033 15:28:41 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:47.033 15:28:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.033 
15:28:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.033 15:28:41 -- common/autotest_common.sh@10 -- # set +x 00:05:47.033 ************************************ 00:05:47.033 START TEST rpc_client 00:05:47.033 ************************************ 00:05:47.033 15:28:41 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:47.033 * Looking for test storage... 00:05:47.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:47.033 15:28:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:47.033 OK 00:05:47.033 15:28:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:47.033 00:05:47.033 real 0m0.096s 00:05:47.033 user 0m0.041s 00:05:47.033 sys 0m0.061s 00:05:47.033 15:28:42 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.033 ************************************ 00:05:47.033 END TEST rpc_client 00:05:47.033 15:28:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:47.033 ************************************ 00:05:47.033 15:28:42 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.033 15:28:42 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:47.033 15:28:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.033 15:28:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.033 15:28:42 -- common/autotest_common.sh@10 -- # set +x 00:05:47.033 ************************************ 00:05:47.033 START TEST json_config 00:05:47.033 ************************************ 00:05:47.033 15:28:42 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:47.033 15:28:42 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.033 15:28:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.292 15:28:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.292 15:28:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.292 15:28:42 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:47.292 15:28:42 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.292 15:28:42 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.292 15:28:42 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.292 15:28:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.292 15:28:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.292 15:28:42 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.292 15:28:42 json_config -- paths/export.sh@5 -- # export PATH 00:05:47.293 15:28:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.293 15:28:42 json_config -- nvmf/common.sh@47 -- # : 0 00:05:47.293 15:28:42 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:47.293 15:28:42 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:47.293 15:28:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.293 15:28:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.293 15:28:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.293 15:28:42 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:47.293 15:28:42 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:47.293 15:28:42 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:47.293 INFO: JSON configuration test init 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:47.293 15:28:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.293 15:28:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:47.293 15:28:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.293 15:28:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.293 15:28:42 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:47.293 15:28:42 json_config -- json_config/common.sh@9 -- # local app=target 00:05:47.293 15:28:42 json_config -- json_config/common.sh@10 -- # shift 00:05:47.293 Waiting for target to run... 00:05:47.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:47.293 15:28:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:47.293 15:28:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:47.293 15:28:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:47.293 15:28:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.293 15:28:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.293 15:28:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61210 00:05:47.293 15:28:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
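The json_config fixture drives a target that is parked with --wait-for-rpc on a private socket, so every later RPC in this test goes through 'rpc.py -s /var/tmp/spdk_tgt.sock'. Condensed from the trace that follows (the wait for the socket is handled by the harness's waitforlisten helper):

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # ... wait until /var/tmp/spdk_tgt.sock accepts connections ...
    scripts/gen_nvme.sh --json-with-subsystems | \
        scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config   # seed the target with local NVMe bdevs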
00:05:47.293 15:28:42 json_config -- json_config/common.sh@25 -- # waitforlisten 61210 /var/tmp/spdk_tgt.sock 00:05:47.293 15:28:42 json_config -- common/autotest_common.sh@829 -- # '[' -z 61210 ']' 00:05:47.293 15:28:42 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:47.293 15:28:42 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.293 15:28:42 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.293 15:28:42 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.293 15:28:42 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.293 15:28:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.293 [2024-07-15 15:28:42.255394] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:47.293 [2024-07-15 15:28:42.255718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61210 ] 00:05:47.551 [2024-07-15 15:28:42.556397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.551 [2024-07-15 15:28:42.616046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.116 15:28:43 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.116 15:28:43 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:48.116 15:28:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:48.116 00:05:48.116 15:28:43 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:48.116 15:28:43 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:48.116 15:28:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:48.116 15:28:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.116 15:28:43 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:48.116 15:28:43 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:48.116 15:28:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.116 15:28:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.374 15:28:43 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:48.374 15:28:43 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:48.374 15:28:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:48.630 15:28:43 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:48.630 15:28:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:48.630 15:28:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:48.630 15:28:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.630 15:28:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:48.630 15:28:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:48.630 15:28:43 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:48.630 15:28:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:48.630 15:28:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:48.630 15:28:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:49.195 15:28:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.195 15:28:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:49.195 15:28:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.195 15:28:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:49.195 15:28:44 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:49.195 15:28:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:49.452 MallocForNvmf0 00:05:49.452 15:28:44 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:49.452 15:28:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:49.709 MallocForNvmf1 00:05:49.709 15:28:44 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:49.709 15:28:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:49.966 [2024-07-15 15:28:44.951822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.966 15:28:44 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:49.966 15:28:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.301 15:28:45 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:50.301 15:28:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:50.575 15:28:45 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:50.575 15:28:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:50.833 15:28:45 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:50.833 15:28:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:51.090 [2024-07-15 15:28:46.180350] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:51.090 15:28:46 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:51.090 15:28:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.090 15:28:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.349 15:28:46 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:51.349 15:28:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.349 15:28:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.349 15:28:46 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:51.349 15:28:46 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:51.349 15:28:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:51.608 MallocBdevForConfigChangeCheck 00:05:51.608 15:28:46 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:51.608 15:28:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.608 15:28:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.608 15:28:46 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:51.608 15:28:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.175 INFO: shutting down applications... 00:05:52.175 15:28:47 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
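The nvmf target assembled above (MallocForNvmf0/1 exported through nqn.2016-06.io.spdk:cnode1, listening on 127.0.0.1:4420) corresponds to this RPC sequence; every argument is taken from the trace:

    RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420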
00:05:52.175 15:28:47 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:52.175 15:28:47 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:52.175 15:28:47 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:52.175 15:28:47 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:52.434 Calling clear_iscsi_subsystem 00:05:52.434 Calling clear_nvmf_subsystem 00:05:52.434 Calling clear_nbd_subsystem 00:05:52.434 Calling clear_ublk_subsystem 00:05:52.434 Calling clear_vhost_blk_subsystem 00:05:52.434 Calling clear_vhost_scsi_subsystem 00:05:52.434 Calling clear_bdev_subsystem 00:05:52.434 15:28:47 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:52.434 15:28:47 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:52.434 15:28:47 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:52.434 15:28:47 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.434 15:28:47 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:52.434 15:28:47 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:52.693 15:28:47 json_config -- json_config/json_config.sh@345 -- # break 00:05:52.693 15:28:47 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:52.693 15:28:47 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:52.693 15:28:47 json_config -- json_config/common.sh@31 -- # local app=target 00:05:52.693 15:28:47 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:52.693 15:28:47 json_config -- json_config/common.sh@35 -- # [[ -n 61210 ]] 00:05:52.693 15:28:47 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61210 00:05:52.693 15:28:47 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:52.693 15:28:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.693 15:28:47 json_config -- json_config/common.sh@41 -- # kill -0 61210 00:05:52.693 15:28:47 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.260 15:28:48 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.260 15:28:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.260 15:28:48 json_config -- json_config/common.sh@41 -- # kill -0 61210 00:05:53.260 15:28:48 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:53.260 15:28:48 json_config -- json_config/common.sh@43 -- # break 00:05:53.260 15:28:48 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:53.260 SPDK target shutdown done 00:05:53.260 15:28:48 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:53.260 INFO: relaunching applications... 00:05:53.260 15:28:48 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
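The clear/relaunch cycle announced here amounts to: snapshot the live config, wipe every subsystem over RPC, SIGINT the target, then start a fresh one from the snapshot. Condensed (the redirect into spdk_tgt_config.json is an assumption; the harness fills that file through its configs_path table, and only the save_config call itself is visible in the trace):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    kill -SIGINT "$app_pid"        # pid of the target started earlier (app_pid["target"] in the harness)
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &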
00:05:53.260 15:28:48 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.260 15:28:48 json_config -- json_config/common.sh@9 -- # local app=target 00:05:53.260 15:28:48 json_config -- json_config/common.sh@10 -- # shift 00:05:53.260 15:28:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:53.260 15:28:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:53.260 15:28:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:53.260 15:28:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.260 15:28:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.260 15:28:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61491 00:05:53.260 Waiting for target to run... 00:05:53.260 15:28:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:53.260 15:28:48 json_config -- json_config/common.sh@25 -- # waitforlisten 61491 /var/tmp/spdk_tgt.sock 00:05:53.260 15:28:48 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.260 15:28:48 json_config -- common/autotest_common.sh@829 -- # '[' -z 61491 ']' 00:05:53.260 15:28:48 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.260 15:28:48 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.260 15:28:48 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.260 15:28:48 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.260 15:28:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.260 [2024-07-15 15:28:48.380518] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:53.260 [2024-07-15 15:28:48.380626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61491 ] 00:05:53.828 [2024-07-15 15:28:48.668200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.828 [2024-07-15 15:28:48.725851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.086 [2024-07-15 15:28:49.043332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.086 [2024-07-15 15:28:49.075451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:54.344 15:28:49 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.344 15:28:49 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:54.344 15:28:49 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.344 00:05:54.344 15:28:49 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:54.344 INFO: Checking if target configuration is the same... 00:05:54.344 15:28:49 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
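The check announced here is a plain text diff: json_diff.sh normalizes both configs with config_filter.py -method sort so key order cannot cause false mismatches, then runs diff -u. In outline (the /tmp file names below are placeholders; the script itself uses mktemp names such as /tmp/62.XXX):

    sort_cfg() { test/json_config/config_filter.py -method sort; }
    sort_cfg < spdk_tgt_config.json > /tmp/saved.sorted
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | sort_cfg > /tmp/live.sorted
    diff -u /tmp/saved.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'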
00:05:54.344 15:28:49 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.344 15:28:49 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:54.344 15:28:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.344 + '[' 2 -ne 2 ']' 00:05:54.344 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:54.344 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:54.344 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:54.344 +++ basename /dev/fd/62 00:05:54.344 ++ mktemp /tmp/62.XXX 00:05:54.344 + tmp_file_1=/tmp/62.7eF 00:05:54.344 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.344 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:54.344 + tmp_file_2=/tmp/spdk_tgt_config.json.A88 00:05:54.344 + ret=0 00:05:54.344 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:54.911 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:54.911 + diff -u /tmp/62.7eF /tmp/spdk_tgt_config.json.A88 00:05:54.911 + echo 'INFO: JSON config files are the same' 00:05:54.911 INFO: JSON config files are the same 00:05:54.911 + rm /tmp/62.7eF /tmp/spdk_tgt_config.json.A88 00:05:54.911 + exit 0 00:05:54.911 15:28:49 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:54.911 15:28:49 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:54.911 INFO: changing configuration and checking if this can be detected... 00:05:54.911 15:28:49 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:54.911 15:28:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.168 15:28:50 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:55.168 15:28:50 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.168 15:28:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.168 + '[' 2 -ne 2 ']' 00:05:55.168 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:55.168 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:55.168 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:55.168 +++ basename /dev/fd/62 00:05:55.168 ++ mktemp /tmp/62.XXX 00:05:55.168 + tmp_file_1=/tmp/62.qnY 00:05:55.168 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.168 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.168 + tmp_file_2=/tmp/spdk_tgt_config.json.d8q 00:05:55.168 + ret=0 00:05:55.168 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:55.735 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:55.735 + diff -u /tmp/62.qnY /tmp/spdk_tgt_config.json.d8q 00:05:55.735 + ret=1 00:05:55.735 + echo '=== Start of file: /tmp/62.qnY ===' 00:05:55.735 + cat /tmp/62.qnY 00:05:55.735 + echo '=== End of file: /tmp/62.qnY ===' 00:05:55.735 + echo '' 00:05:55.735 + echo '=== Start of file: /tmp/spdk_tgt_config.json.d8q ===' 00:05:55.735 + cat /tmp/spdk_tgt_config.json.d8q 00:05:55.735 + echo '=== End of file: /tmp/spdk_tgt_config.json.d8q ===' 00:05:55.735 + echo '' 00:05:55.735 + rm /tmp/62.qnY /tmp/spdk_tgt_config.json.d8q 00:05:55.735 + exit 1 00:05:55.735 INFO: configuration change detected. 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@317 -- # [[ -n 61491 ]] 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.735 15:28:50 json_config -- json_config/json_config.sh@323 -- # killprocess 61491 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@948 -- # '[' -z 61491 ']' 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@952 -- # kill -0 61491 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@953 -- # uname 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61491 00:05:55.735 
15:28:50 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.735 killing process with pid 61491 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61491' 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@967 -- # kill 61491 00:05:55.735 15:28:50 json_config -- common/autotest_common.sh@972 -- # wait 61491 00:05:56.007 15:28:50 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.007 15:28:50 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:56.007 15:28:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:56.007 15:28:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.007 15:28:51 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:56.007 INFO: Success 00:05:56.007 15:28:51 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:56.007 00:05:56.007 real 0m8.912s 00:05:56.007 user 0m13.353s 00:05:56.007 sys 0m1.550s 00:05:56.007 15:28:51 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.007 15:28:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.007 ************************************ 00:05:56.007 END TEST json_config 00:05:56.007 ************************************ 00:05:56.007 15:28:51 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.007 15:28:51 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:56.007 15:28:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.007 15:28:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.007 15:28:51 -- common/autotest_common.sh@10 -- # set +x 00:05:56.007 ************************************ 00:05:56.007 START TEST json_config_extra_key 00:05:56.007 ************************************ 00:05:56.007 15:28:51 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:56.007 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.007 15:28:51 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.007 15:28:51 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.008 15:28:51 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.008 15:28:51 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.008 15:28:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.008 15:28:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.008 15:28:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.008 15:28:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:56.009 15:28:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.009 15:28:51 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:56.011 15:28:51 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:56.012 15:28:51 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:56.012 15:28:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.012 15:28:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.012 15:28:51 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.012 15:28:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:56.012 15:28:51 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:56.012 15:28:51 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:56.012 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:56.012 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:56.012 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:56.012 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:56.012 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:56.012 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:56.012 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:56.012 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:56.012 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:56.012 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:56.012 INFO: launching applications... 00:05:56.012 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:56.012 15:28:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:56.012 15:28:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:56.012 15:28:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:56.012 15:28:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.012 15:28:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.012 15:28:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.012 15:28:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.012 15:28:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.277 15:28:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61667 00:05:56.277 15:28:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.277 Waiting for target to run... 
00:05:56.277 15:28:51 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:56.277 15:28:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61667 /var/tmp/spdk_tgt.sock 00:05:56.277 15:28:51 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61667 ']' 00:05:56.277 15:28:51 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.277 15:28:51 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.277 15:28:51 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.277 15:28:51 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.277 15:28:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:56.277 [2024-07-15 15:28:51.199305] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:56.277 [2024-07-15 15:28:51.199410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61667 ] 00:05:56.536 [2024-07-15 15:28:51.505599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.536 [2024-07-15 15:28:51.561020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.104 15:28:52 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.104 15:28:52 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:57.104 15:28:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:57.104 00:05:57.104 INFO: shutting down applications... 00:05:57.104 15:28:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
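The 'shutting down applications...' step that follows uses the same helper as the earlier json_config targets: send SIGINT, then poll the pid for at most 30 half-second intervals. The loop, with the harness's app_pid["target"] flattened to a plain variable:

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # pid gone -> clean shutdown
        sleep 0.5
    done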
00:05:57.104 15:28:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:57.104 15:28:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:57.104 15:28:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:57.104 15:28:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61667 ]] 00:05:57.104 15:28:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61667 00:05:57.104 15:28:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:57.104 15:28:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.104 15:28:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61667 00:05:57.104 15:28:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:57.680 15:28:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:57.680 15:28:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.680 15:28:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61667 00:05:57.680 15:28:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:57.680 15:28:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:57.680 15:28:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:57.680 SPDK target shutdown done 00:05:57.680 15:28:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:57.680 Success 00:05:57.680 15:28:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:57.680 00:05:57.680 real 0m1.680s 00:05:57.680 user 0m1.620s 00:05:57.680 sys 0m0.297s 00:05:57.680 15:28:52 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.680 15:28:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.680 ************************************ 00:05:57.680 END TEST json_config_extra_key 00:05:57.680 ************************************ 00:05:57.680 15:28:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.680 15:28:52 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.680 15:28:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.680 15:28:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.680 15:28:52 -- common/autotest_common.sh@10 -- # set +x 00:05:57.680 ************************************ 00:05:57.680 START TEST alias_rpc 00:05:57.680 ************************************ 00:05:57.680 15:28:52 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.938 * Looking for test storage... 
00:05:57.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:57.938 15:28:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:57.938 15:28:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61744 00:05:57.938 15:28:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.938 15:28:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61744 00:05:57.938 15:28:52 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61744 ']' 00:05:57.938 15:28:52 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.938 15:28:52 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.938 15:28:52 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.938 15:28:52 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.938 15:28:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.938 [2024-07-15 15:28:52.923646] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:57.938 [2024-07-15 15:28:52.923744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61744 ] 00:05:57.938 [2024-07-15 15:28:53.060346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.196 [2024-07-15 15:28:53.130131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.196 15:28:53 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.196 15:28:53 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:58.196 15:28:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:58.762 15:28:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61744 00:05:58.762 15:28:53 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61744 ']' 00:05:58.762 15:28:53 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61744 00:05:58.762 15:28:53 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:58.762 15:28:53 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.762 15:28:53 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61744 00:05:58.762 15:28:53 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.762 killing process with pid 61744 00:05:58.762 15:28:53 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.762 15:28:53 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61744' 00:05:58.762 15:28:53 alias_rpc -- common/autotest_common.sh@967 -- # kill 61744 00:05:58.762 15:28:53 alias_rpc -- common/autotest_common.sh@972 -- # wait 61744 00:05:59.020 00:05:59.020 real 0m1.124s 00:05:59.020 user 0m1.273s 00:05:59.020 sys 0m0.329s 00:05:59.020 15:28:53 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.020 ************************************ 00:05:59.020 END TEST alias_rpc 00:05:59.020 ************************************ 00:05:59.020 15:28:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.020 
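The alias_rpc run above reduces to three steps: start a plain spdk_tgt, feed a JSON configuration to the RPC client on stdin with load_config -i, and kill the target by pid. A hedged sketch of that flow; the one-line JSON is an illustrative placeholder, not the configuration the test actually loads:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!

    # load_config -i reads the JSON configuration from stdin and applies it over RPC
    # (default socket /var/tmp/spdk.sock, the one waitforlisten polled above).
    echo '{ "subsystems": [] }' | /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i

    kill "$spdk_tgt_pid"
    wait "$spdk_tgt_pid"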
15:28:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.020 15:28:53 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:05:59.020 15:28:53 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:59.020 15:28:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.020 15:28:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.020 15:28:53 -- common/autotest_common.sh@10 -- # set +x 00:05:59.020 ************************************ 00:05:59.020 START TEST dpdk_mem_utility 00:05:59.020 ************************************ 00:05:59.020 15:28:53 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:59.020 * Looking for test storage... 00:05:59.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:59.020 15:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:59.020 15:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61817 00:05:59.020 15:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61817 00:05:59.020 15:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.020 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61817 ']' 00:05:59.020 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.020 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.020 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.020 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.020 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.020 [2024-07-15 15:28:54.105264] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:05:59.020 [2024-07-15 15:28:54.105375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61817 ] 00:05:59.276 [2024-07-15 15:28:54.238311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.276 [2024-07-15 15:28:54.296755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.534 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.534 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:59.535 15:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:59.535 15:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:59.535 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.535 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.535 { 00:05:59.535 "filename": "/tmp/spdk_mem_dump.txt" 00:05:59.535 } 00:05:59.535 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.535 15:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:59.535 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:59.535 1 heaps totaling size 814.000000 MiB 00:05:59.535 size: 814.000000 MiB heap id: 0 00:05:59.535 end heaps---------- 00:05:59.535 8 mempools totaling size 598.116089 MiB 00:05:59.535 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:59.535 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:59.535 size: 84.521057 MiB name: bdev_io_61817 00:05:59.535 size: 51.011292 MiB name: evtpool_61817 00:05:59.535 size: 50.003479 MiB name: msgpool_61817 00:05:59.535 size: 21.763794 MiB name: PDU_Pool 00:05:59.535 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:59.535 size: 0.026123 MiB name: Session_Pool 00:05:59.535 end mempools------- 00:05:59.535 6 memzones totaling size 4.142822 MiB 00:05:59.535 size: 1.000366 MiB name: RG_ring_0_61817 00:05:59.535 size: 1.000366 MiB name: RG_ring_1_61817 00:05:59.535 size: 1.000366 MiB name: RG_ring_4_61817 00:05:59.535 size: 1.000366 MiB name: RG_ring_5_61817 00:05:59.535 size: 0.125366 MiB name: RG_ring_2_61817 00:05:59.535 size: 0.015991 MiB name: RG_ring_3_61817 00:05:59.535 end memzones------- 00:05:59.535 15:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:59.535 heap id: 0 total size: 814.000000 MiB number of busy elements: 234 number of free elements: 15 00:05:59.535 list of free elements. 
size: 12.484009 MiB 00:05:59.535 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:59.535 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:59.535 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:59.535 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:59.535 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:59.535 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:59.535 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:59.535 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:59.535 element at address: 0x200000200000 with size: 0.836853 MiB 00:05:59.535 element at address: 0x20001aa00000 with size: 0.571167 MiB 00:05:59.535 element at address: 0x20000b200000 with size: 0.489258 MiB 00:05:59.535 element at address: 0x200000800000 with size: 0.486877 MiB 00:05:59.535 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:59.535 element at address: 0x200027e00000 with size: 0.398315 MiB 00:05:59.535 element at address: 0x200003a00000 with size: 0.350769 MiB 00:05:59.535 list of standard malloc elements. size: 199.253418 MiB 00:05:59.535 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:59.535 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:59.535 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:59.535 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:59.535 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:59.535 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:59.535 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:59.535 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:59.535 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:59.535 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d7640 with size: 0.000183 MiB 
00:05:59.535 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:59.535 element at 
address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93b80 
with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:59.535 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e66040 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6d740 with size: 0.000183 MiB 
00:05:59.535 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:59.535 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:59.536 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:59.536 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:59.536 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:59.536 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:59.536 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:59.536 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:59.536 element at 
address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:59.536 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:59.536 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:59.536 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:59.536 list of memzone associated elements. size: 602.262573 MiB 00:05:59.536 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:59.536 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:59.536 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:59.536 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:59.536 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:59.536 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61817_0 00:05:59.536 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:59.536 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61817_0 00:05:59.536 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:59.536 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61817_0 00:05:59.536 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:59.536 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:59.536 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:59.536 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:59.536 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:59.536 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61817 00:05:59.536 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:59.536 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61817 00:05:59.536 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:59.536 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61817 00:05:59.536 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:59.536 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:59.536 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:59.536 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:59.536 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:59.536 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:59.536 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:59.536 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:59.536 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:59.536 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61817 00:05:59.536 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:59.536 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61817 00:05:59.536 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:59.536 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61817 00:05:59.536 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:59.536 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61817 00:05:59.536 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:59.536 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61817 00:05:59.536 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:59.536 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:59.536 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:59.536 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:05:59.536 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:59.536 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:59.536 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:59.536 associated memzone info: size: 0.125366 MiB name: RG_ring_2_61817 00:05:59.536 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:59.536 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:59.536 element at address: 0x200027e66100 with size: 0.023743 MiB 00:05:59.536 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:59.536 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:59.536 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61817 00:05:59.536 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:05:59.536 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:59.536 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:59.536 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61817 00:05:59.536 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:59.536 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61817 00:05:59.536 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:05:59.536 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:59.536 15:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:59.536 15:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61817 00:05:59.536 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61817 ']' 00:05:59.536 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61817 00:05:59.536 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:59.536 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.536 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61817 00:05:59.536 killing process with pid 61817 00:05:59.536 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.536 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.536 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61817' 00:05:59.536 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61817 00:05:59.536 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61817 00:05:59.793 00:05:59.793 real 0m0.912s 00:05:59.793 user 0m0.970s 00:05:59.793 sys 0m0.283s 00:05:59.793 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.793 15:28:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.793 ************************************ 00:05:59.793 END TEST dpdk_mem_utility 00:05:59.793 ************************************ 00:05:59.793 15:28:54 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.794 15:28:54 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:59.794 15:28:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.794 15:28:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.794 15:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:59.794 ************************************ 00:05:59.794 START TEST event 
00:05:59.794 ************************************ 00:05:59.794 15:28:54 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:00.052 * Looking for test storage... 00:06:00.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:00.052 15:28:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:00.052 15:28:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:00.052 15:28:55 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.052 15:28:55 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:00.052 15:28:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.052 15:28:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.052 ************************************ 00:06:00.052 START TEST event_perf 00:06:00.052 ************************************ 00:06:00.052 15:28:55 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.052 Running I/O for 1 seconds...[2024-07-15 15:28:55.027505] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:00.052 [2024-07-15 15:28:55.027614] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61893 ] 00:06:00.052 [2024-07-15 15:28:55.165034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.309 Running I/O for 1 seconds...[2024-07-15 15:28:55.226220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.310 [2024-07-15 15:28:55.226340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.310 [2024-07-15 15:28:55.226464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.310 [2024-07-15 15:28:55.226464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.243 00:06:01.243 lcore 0: 203257 00:06:01.243 lcore 1: 203258 00:06:01.243 lcore 2: 203260 00:06:01.243 lcore 3: 203259 00:06:01.243 done. 00:06:01.243 00:06:01.243 real 0m1.290s 00:06:01.243 user 0m4.104s 00:06:01.243 sys 0m0.042s 00:06:01.243 15:28:56 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.243 15:28:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.243 ************************************ 00:06:01.243 END TEST event_perf 00:06:01.243 ************************************ 00:06:01.243 15:28:56 event -- common/autotest_common.sh@1142 -- # return 0 00:06:01.243 15:28:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:01.243 15:28:56 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:01.243 15:28:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.243 15:28:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.243 ************************************ 00:06:01.243 START TEST event_reactor 00:06:01.243 ************************************ 00:06:01.243 15:28:56 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:01.243 [2024-07-15 15:28:56.365188] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:06:01.243 [2024-07-15 15:28:56.365469] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61931 ] 00:06:01.501 [2024-07-15 15:28:56.500110] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.501 [2024-07-15 15:28:56.559666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.875 test_start 00:06:02.875 oneshot 00:06:02.875 tick 100 00:06:02.875 tick 100 00:06:02.875 tick 250 00:06:02.875 tick 100 00:06:02.875 tick 100 00:06:02.875 tick 100 00:06:02.875 tick 250 00:06:02.875 tick 500 00:06:02.875 tick 100 00:06:02.875 tick 100 00:06:02.875 tick 250 00:06:02.875 tick 100 00:06:02.875 tick 100 00:06:02.875 test_end 00:06:02.875 00:06:02.875 real 0m1.285s 00:06:02.875 user 0m1.134s 00:06:02.875 sys 0m0.043s 00:06:02.875 ************************************ 00:06:02.875 END TEST event_reactor 00:06:02.875 ************************************ 00:06:02.875 15:28:57 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.875 15:28:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:02.875 15:28:57 event -- common/autotest_common.sh@1142 -- # return 0 00:06:02.875 15:28:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:02.875 15:28:57 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:02.876 15:28:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.876 15:28:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.876 ************************************ 00:06:02.876 START TEST event_reactor_perf 00:06:02.876 ************************************ 00:06:02.876 15:28:57 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:02.876 [2024-07-15 15:28:57.695956] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:06:02.876 [2024-07-15 15:28:57.696076] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61961 ] 00:06:02.876 [2024-07-15 15:28:57.831830] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.876 [2024-07-15 15:28:57.900450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.272 test_start 00:06:04.272 test_end 00:06:04.272 Performance: 335521 events per second 00:06:04.272 00:06:04.272 real 0m1.292s 00:06:04.272 user 0m1.148s 00:06:04.272 sys 0m0.037s 00:06:04.272 15:28:58 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.272 ************************************ 00:06:04.272 END TEST event_reactor_perf 00:06:04.272 ************************************ 00:06:04.272 15:28:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.272 15:28:59 event -- common/autotest_common.sh@1142 -- # return 0 00:06:04.272 15:28:59 event -- event/event.sh@49 -- # uname -s 00:06:04.272 15:28:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:04.272 15:28:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:04.272 15:28:59 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.272 15:28:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.272 15:28:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.272 ************************************ 00:06:04.272 START TEST event_scheduler 00:06:04.272 ************************************ 00:06:04.272 15:28:59 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:04.272 * Looking for test storage... 00:06:04.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:04.272 15:28:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:04.272 15:28:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62023 00:06:04.272 15:28:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:04.272 15:28:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.272 15:28:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62023 00:06:04.272 15:28:59 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62023 ']' 00:06:04.272 15:28:59 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.272 15:28:59 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.272 15:28:59 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.272 15:28:59 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.272 15:28:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.272 [2024-07-15 15:28:59.166673] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:06:04.272 [2024-07-15 15:28:59.166769] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62023 ] 00:06:04.272 [2024-07-15 15:28:59.305245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.551 [2024-07-15 15:28:59.388552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.551 [2024-07-15 15:28:59.388616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.551 [2024-07-15 15:28:59.388694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.551 [2024-07-15 15:28:59.388706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.115 15:29:00 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.115 15:29:00 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:05.115 15:29:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:05.115 15:29:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.115 15:29:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.115 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.115 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.115 POWER: Cannot set governor of lcore 0 to performance 00:06:05.115 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.115 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.115 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.115 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.115 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:05.115 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:05.115 POWER: Unable to set Power Management Environment for lcore 0 00:06:05.115 [2024-07-15 15:29:00.107434] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:05.115 [2024-07-15 15:29:00.107557] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:05.115 [2024-07-15 15:29:00.107676] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:05.115 [2024-07-15 15:29:00.107783] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:05.115 [2024-07-15 15:29:00.107884] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:05.115 [2024-07-15 15:29:00.108011] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:05.115 15:29:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.115 15:29:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:05.115 15:29:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.115 15:29:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 [2024-07-15 15:29:00.160778] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
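The POWER errors above only mean this VM exposes no cpufreq scaling governors, so the dpdk_governor cannot initialize and the dynamic scheduler falls back to its built-in limits (load 20, core 80, busy 95); the test proceeds regardless. The RPC sequence itself is the usual --wait-for-rpc pattern, sketched below with the invocation copied from the trace:

    # Start the test app paused so a scheduler can be selected before framework init.
    /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    # (the test waits for the RPC socket to come up here before issuing commands)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_set_scheduler dynamic   # must be issued before framework_start_init
    $rpc framework_start_init              # finishes startup; the dynamic scheduler takes over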
00:06:05.115 15:29:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.115 15:29:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:05.115 15:29:00 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.115 15:29:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.115 15:29:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 ************************************ 00:06:05.115 START TEST scheduler_create_thread 00:06:05.115 ************************************ 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 2 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 3 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 4 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 5 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 6 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 7 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 8 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 9 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.115 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.372 10 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.372 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.937 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.937 00:06:05.937 real 0m0.590s 00:06:05.937 user 0m0.015s 00:06:05.937 sys 0m0.006s 00:06:05.937 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.937 15:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.937 ************************************ 00:06:05.937 END TEST scheduler_create_thread 00:06:05.937 ************************************ 00:06:05.937 15:29:00 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:05.937 15:29:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:05.937 15:29:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62023 00:06:05.937 15:29:00 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62023 ']' 00:06:05.937 15:29:00 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62023 00:06:05.937 15:29:00 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:05.937 15:29:00 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.937 15:29:00 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62023 00:06:05.937 15:29:00 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:05.937 15:29:00 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:05.937 15:29:00 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62023' 00:06:05.937 killing process with pid 62023 00:06:05.937 15:29:00 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62023 00:06:05.937 15:29:00 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62023 00:06:06.201 [2024-07-15 15:29:01.242781] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
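The scheduler_create_thread subtest above exercises the test app's scheduler_plugin RPCs: pinned active and idle threads for each of the four cores, a thread at 30% load, a thread whose activity is raised to 50% after creation, and a thread that is created and immediately deleted. A condensed sketch of that sequence (assuming rpc.py can import scheduler_plugin, as rpc_cmd arranges in the test environment):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"

    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
    $rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread pinned to core 0
    $rpc scheduler_thread_create -n one_third_active -a 30        # unpinned, ~30% active
    tid=$($rpc scheduler_thread_create -n half_active -a 0)       # RPC prints the new thread id
    $rpc scheduler_thread_set_active "$tid" 50                    # raise its activity to 50%
    tid=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$tid"                           # and remove it again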
00:06:06.459 00:06:06.459 real 0m2.371s 00:06:06.459 user 0m4.856s 00:06:06.459 sys 0m0.282s 00:06:06.459 15:29:01 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.459 ************************************ 00:06:06.459 END TEST event_scheduler 00:06:06.459 ************************************ 00:06:06.459 15:29:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.459 15:29:01 event -- common/autotest_common.sh@1142 -- # return 0 00:06:06.459 15:29:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:06.459 15:29:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:06.459 15:29:01 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.459 15:29:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.459 15:29:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.459 ************************************ 00:06:06.459 START TEST app_repeat 00:06:06.459 ************************************ 00:06:06.459 15:29:01 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62113 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:06.459 Process app_repeat pid: 62113 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62113' 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.459 spdk_app_start Round 0 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:06.459 15:29:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62113 /var/tmp/spdk-nbd.sock 00:06:06.459 15:29:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62113 ']' 00:06:06.459 15:29:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.459 15:29:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.459 15:29:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.459 15:29:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.459 15:29:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.459 [2024-07-15 15:29:01.480259] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:06:06.459 [2024-07-15 15:29:01.480361] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62113 ] 00:06:06.716 [2024-07-15 15:29:01.618894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.716 [2024-07-15 15:29:01.691563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.716 [2024-07-15 15:29:01.691596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.716 15:29:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.716 15:29:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:06.716 15:29:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.974 Malloc0 00:06:06.974 15:29:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.232 Malloc1 00:06:07.232 15:29:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.232 15:29:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.797 /dev/nbd0 00:06:07.797 15:29:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.797 15:29:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:07.797 15:29:02 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.797 1+0 records in 00:06:07.797 1+0 records out 00:06:07.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283502 s, 14.4 MB/s 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:07.797 15:29:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:07.797 15:29:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.797 15:29:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.797 15:29:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.056 /dev/nbd1 00:06:08.056 15:29:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.056 15:29:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.056 1+0 records in 00:06:08.056 1+0 records out 00:06:08.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318318 s, 12.9 MB/s 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:08.056 15:29:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:08.056 15:29:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.056 15:29:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.056 15:29:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.056 15:29:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
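Both nbd devices pass the same readiness probe above before any data moves. A rough sketch of that probe follows; the commands, retry bounds, and temp-file path are taken from the trace, while the helper name and the sleep between retries are assumptions (only the successful first attempt is visible in the trace):

    # Rough shape of the readiness probe traced for nbd0 and nbd1 above.
    probe_nbd() {
        local nbd_name=$1 tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest i size
        # Wait for the kernel to list the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Read one 4 KiB block straight off the device to prove I/O works.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [[ $size != 0 ]]
    }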
00:06:08.056 15:29:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.330 { 00:06:08.330 "bdev_name": "Malloc0", 00:06:08.330 "nbd_device": "/dev/nbd0" 00:06:08.330 }, 00:06:08.330 { 00:06:08.330 "bdev_name": "Malloc1", 00:06:08.330 "nbd_device": "/dev/nbd1" 00:06:08.330 } 00:06:08.330 ]' 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.330 { 00:06:08.330 "bdev_name": "Malloc0", 00:06:08.330 "nbd_device": "/dev/nbd0" 00:06:08.330 }, 00:06:08.330 { 00:06:08.330 "bdev_name": "Malloc1", 00:06:08.330 "nbd_device": "/dev/nbd1" 00:06:08.330 } 00:06:08.330 ]' 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.330 /dev/nbd1' 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.330 /dev/nbd1' 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.330 256+0 records in 00:06:08.330 256+0 records out 00:06:08.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00774188 s, 135 MB/s 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.330 256+0 records in 00:06:08.330 256+0 records out 00:06:08.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263669 s, 39.8 MB/s 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.330 256+0 records in 00:06:08.330 256+0 records out 00:06:08.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308248 s, 34.0 MB/s 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.330 15:29:03 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.330 15:29:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.589 15:29:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.589 15:29:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.589 15:29:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.589 15:29:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.589 15:29:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.589 15:29:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.589 15:29:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.589 15:29:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.589 15:29:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.589 15:29:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.847 15:29:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.847 15:29:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.847 15:29:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.847 15:29:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.847 15:29:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.847 15:29:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.847 15:29:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.847 15:29:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.847 15:29:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.847 15:29:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.105 15:29:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.105 15:29:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.105 15:29:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.105 15:29:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.105 15:29:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.105 15:29:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.105 15:29:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.105 15:29:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.105 15:29:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.105 15:29:04 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.105 15:29:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.363 15:29:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.363 15:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.363 15:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.363 15:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.363 15:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.363 15:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.363 15:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.363 15:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.363 15:29:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.363 15:29:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.363 15:29:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.363 15:29:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.363 15:29:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.622 15:29:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.622 [2024-07-15 15:29:04.692430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.622 [2024-07-15 15:29:04.748360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.622 [2024-07-15 15:29:04.748373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.880 [2024-07-15 15:29:04.779322] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.880 [2024-07-15 15:29:04.779400] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.189 15:29:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.189 spdk_app_start Round 1 00:06:13.189 15:29:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:13.189 15:29:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62113 /var/tmp/spdk-nbd.sock 00:06:13.189 15:29:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62113 ']' 00:06:13.189 15:29:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.189 15:29:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.189 15:29:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
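Round 0 above exercises both devices with a single write-then-verify pass. Stripped of the nbd_common.sh plumbing, the data path is approximately as follows (paths and sizes as traced: 1 MiB of /dev/urandom data, 4 KiB blocks, O_DIRECT writes):

    # Approximate data path of the Round 0 verify traced above.
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # generate the pattern
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write it to each device
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$dev"                             # read back and compare
    done
    rm "$tmp_file"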
00:06:13.189 15:29:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.189 15:29:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.189 15:29:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.189 15:29:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:13.189 15:29:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.189 Malloc0 00:06:13.189 15:29:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.448 Malloc1 00:06:13.448 15:29:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.448 15:29:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.706 /dev/nbd0 00:06:13.706 15:29:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.706 15:29:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.706 1+0 records in 00:06:13.706 1+0 records out 
00:06:13.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220338 s, 18.6 MB/s 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:13.706 15:29:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:13.706 15:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.706 15:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.706 15:29:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.964 /dev/nbd1 00:06:13.964 15:29:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.964 15:29:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.964 1+0 records in 00:06:13.964 1+0 records out 00:06:13.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205702 s, 19.9 MB/s 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:13.964 15:29:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:13.964 15:29:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.964 15:29:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.964 15:29:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.964 15:29:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.964 15:29:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.223 15:29:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.223 { 00:06:14.223 "bdev_name": "Malloc0", 00:06:14.223 "nbd_device": "/dev/nbd0" 00:06:14.223 }, 00:06:14.223 { 00:06:14.223 "bdev_name": "Malloc1", 00:06:14.223 "nbd_device": "/dev/nbd1" 00:06:14.223 } 
00:06:14.223 ]' 00:06:14.223 15:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.223 { 00:06:14.223 "bdev_name": "Malloc0", 00:06:14.223 "nbd_device": "/dev/nbd0" 00:06:14.223 }, 00:06:14.223 { 00:06:14.223 "bdev_name": "Malloc1", 00:06:14.223 "nbd_device": "/dev/nbd1" 00:06:14.223 } 00:06:14.223 ]' 00:06:14.223 15:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.480 /dev/nbd1' 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.480 /dev/nbd1' 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.480 256+0 records in 00:06:14.480 256+0 records out 00:06:14.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103025 s, 102 MB/s 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.480 256+0 records in 00:06:14.480 256+0 records out 00:06:14.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256823 s, 40.8 MB/s 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.480 256+0 records in 00:06:14.480 256+0 records out 00:06:14.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282078 s, 37.2 MB/s 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.480 15:29:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.738 15:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.738 15:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.738 15:29:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.738 15:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.738 15:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.738 15:29:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.738 15:29:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.738 15:29:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.738 15:29:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.738 15:29:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.997 15:29:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.997 15:29:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.997 15:29:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.997 15:29:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.997 15:29:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.997 15:29:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.997 15:29:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.997 15:29:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.997 15:29:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.997 15:29:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.997 15:29:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.255 15:29:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.255 15:29:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.255 15:29:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:15.255 15:29:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.255 15:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.255 15:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.255 15:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.255 15:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.255 15:29:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.255 15:29:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.513 15:29:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.513 15:29:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.513 15:29:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.771 15:29:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.771 [2024-07-15 15:29:10.808778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.771 [2024-07-15 15:29:10.867604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.771 [2024-07-15 15:29:10.867613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.771 [2024-07-15 15:29:10.898480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.771 [2024-07-15 15:29:10.898555] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.108 15:29:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:19.108 spdk_app_start Round 2 00:06:19.108 15:29:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:19.108 15:29:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62113 /var/tmp/spdk-nbd.sock 00:06:19.108 15:29:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62113 ']' 00:06:19.108 15:29:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.108 15:29:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.108 15:29:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
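Each round winds down the same way before the next spdk_app_start: detach both devices, wait for them to leave /proc/partitions, confirm nothing is still attached, then ask the app to exit over RPC. Condensed from the trace (the retry sleep is again an assumption):

    # Condensed per-round teardown as traced above.
    rpc_nbd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    for dev in /dev/nbd0 /dev/nbd1; do
        rpc_nbd nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do                 # waitfornbd_exit in the trace
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done

    # The trace then confirms nbd_get_disks reports an empty list before ending the round.
    rpc_nbd spdk_kill_instance SIGTERM
    sleep 3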
00:06:19.108 15:29:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.108 15:29:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.108 15:29:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.108 15:29:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:19.108 15:29:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.108 Malloc0 00:06:19.108 15:29:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.366 Malloc1 00:06:19.624 15:29:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.624 15:29:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.882 /dev/nbd0 00:06:19.882 15:29:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.882 15:29:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.882 15:29:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:19.882 15:29:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:19.882 15:29:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.882 15:29:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.882 15:29:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:19.882 15:29:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:19.882 15:29:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.882 15:29:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.882 15:29:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.882 1+0 records in 00:06:19.882 1+0 records out 
00:06:19.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290399 s, 14.1 MB/s 00:06:19.883 15:29:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.883 15:29:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:19.883 15:29:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.883 15:29:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.883 15:29:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:19.883 15:29:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.883 15:29:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.883 15:29:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.142 /dev/nbd1 00:06:20.142 15:29:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.142 15:29:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.142 1+0 records in 00:06:20.142 1+0 records out 00:06:20.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320201 s, 12.8 MB/s 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:20.142 15:29:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:20.142 15:29:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.142 15:29:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.142 15:29:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.142 15:29:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.142 15:29:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.400 { 00:06:20.400 "bdev_name": "Malloc0", 00:06:20.400 "nbd_device": "/dev/nbd0" 00:06:20.400 }, 00:06:20.400 { 00:06:20.400 "bdev_name": "Malloc1", 00:06:20.400 "nbd_device": "/dev/nbd1" 00:06:20.400 } 
00:06:20.400 ]' 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.400 { 00:06:20.400 "bdev_name": "Malloc0", 00:06:20.400 "nbd_device": "/dev/nbd0" 00:06:20.400 }, 00:06:20.400 { 00:06:20.400 "bdev_name": "Malloc1", 00:06:20.400 "nbd_device": "/dev/nbd1" 00:06:20.400 } 00:06:20.400 ]' 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.400 /dev/nbd1' 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.400 /dev/nbd1' 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.400 256+0 records in 00:06:20.400 256+0 records out 00:06:20.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00709874 s, 148 MB/s 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.400 256+0 records in 00:06:20.400 256+0 records out 00:06:20.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280113 s, 37.4 MB/s 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.400 15:29:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.659 256+0 records in 00:06:20.659 256+0 records out 00:06:20.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273421 s, 38.4 MB/s 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.659 15:29:15 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.659 15:29:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.917 15:29:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.917 15:29:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.917 15:29:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.917 15:29:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.917 15:29:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.917 15:29:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.917 15:29:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.917 15:29:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.918 15:29:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.918 15:29:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.176 15:29:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.176 15:29:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.176 15:29:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.176 15:29:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.176 15:29:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.176 15:29:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.176 15:29:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.176 15:29:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.176 15:29:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.176 15:29:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.176 15:29:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.435 15:29:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.435 15:29:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.435 15:29:16 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:21.435 15:29:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.435 15:29:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.435 15:29:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.435 15:29:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.435 15:29:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.435 15:29:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.435 15:29:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.435 15:29:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.435 15:29:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.435 15:29:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.694 15:29:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.953 [2024-07-15 15:29:16.861298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.953 [2024-07-15 15:29:16.920842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.953 [2024-07-15 15:29:16.920853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.953 [2024-07-15 15:29:16.950603] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.953 [2024-07-15 15:29:16.950666] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.256 15:29:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62113 /var/tmp/spdk-nbd.sock 00:06:25.256 15:29:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62113 ']' 00:06:25.256 15:29:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.256 15:29:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.256 15:29:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
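The attach/detach bookkeeping in every round is the same jq/grep pipeline over nbd_get_disks output, condensed here from the trace:

    # Count the attached nbd devices; || true keeps grep -c from failing the
    # pipeline when the list is empty.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nbd_disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    # count is 2 right after both nbd_start_disk calls and 0 after both
    # nbd_stop_disk calls, which is exactly what the checks above assert.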
00:06:25.256 15:29:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.256 15:29:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:25.256 15:29:20 event.app_repeat -- event/event.sh@39 -- # killprocess 62113 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62113 ']' 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62113 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62113 00:06:25.256 killing process with pid 62113 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62113' 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62113 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62113 00:06:25.256 spdk_app_start is called in Round 0. 00:06:25.256 Shutdown signal received, stop current app iteration 00:06:25.256 Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 reinitialization... 00:06:25.256 spdk_app_start is called in Round 1. 00:06:25.256 Shutdown signal received, stop current app iteration 00:06:25.256 Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 reinitialization... 00:06:25.256 spdk_app_start is called in Round 2. 00:06:25.256 Shutdown signal received, stop current app iteration 00:06:25.256 Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 reinitialization... 00:06:25.256 spdk_app_start is called in Round 3. 
00:06:25.256 Shutdown signal received, stop current app iteration 00:06:25.256 15:29:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:25.256 15:29:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:25.256 00:06:25.256 real 0m18.756s 00:06:25.256 user 0m42.725s 00:06:25.256 sys 0m2.863s 00:06:25.256 ************************************ 00:06:25.256 END TEST app_repeat 00:06:25.256 ************************************ 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.256 15:29:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.256 15:29:20 event -- common/autotest_common.sh@1142 -- # return 0 00:06:25.256 15:29:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:25.256 15:29:20 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:25.256 15:29:20 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.256 15:29:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.256 15:29:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.256 ************************************ 00:06:25.256 START TEST cpu_locks 00:06:25.256 ************************************ 00:06:25.256 15:29:20 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:25.256 * Looking for test storage... 00:06:25.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:25.256 15:29:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:25.256 15:29:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:25.256 15:29:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:25.256 15:29:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:25.256 15:29:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.256 15:29:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.256 15:29:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.256 ************************************ 00:06:25.256 START TEST default_locks 00:06:25.256 ************************************ 00:06:25.256 15:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:25.256 15:29:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62736 00:06:25.256 15:29:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.256 15:29:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62736 00:06:25.256 15:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62736 ']' 00:06:25.256 15:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.256 15:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.256 15:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
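The default_locks test that starts here launches a bare spdk_tgt on a single core and checks for its CPU-core lock with lslocks. Reduced to plain commands (the socket wait and the process teardown are simplified stand-ins for the harness's waitforlisten and killprocess helpers):

    # Sketch of the default_locks setup and check traced around here.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!

    # Stand-in for waitforlisten: wait for the default RPC socket to appear.
    while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.1; done

    # The target should be holding its per-core lock file while it runs.
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock

    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"    # killprocess does roughly this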
00:06:25.256 15:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.256 15:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.514 [2024-07-15 15:29:20.412466] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:25.514 [2024-07-15 15:29:20.412596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62736 ] 00:06:25.514 [2024-07-15 15:29:20.549060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.514 [2024-07-15 15:29:20.619623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.445 15:29:21 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.445 15:29:21 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:26.445 15:29:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62736 00:06:26.445 15:29:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62736 00:06:26.445 15:29:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.703 15:29:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62736 00:06:26.703 15:29:21 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62736 ']' 00:06:26.703 15:29:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62736 00:06:26.703 15:29:21 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:26.703 15:29:21 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.703 15:29:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62736 00:06:26.960 15:29:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.960 15:29:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.960 15:29:21 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62736' 00:06:26.960 killing process with pid 62736 00:06:26.960 15:29:21 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62736 00:06:26.960 15:29:21 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62736 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62736 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62736 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62736 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 62736 ']' 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.219 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62736) - No such process 00:06:27.219 ERROR: process (pid: 62736) is no longer running 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.219 00:06:27.219 real 0m1.760s 00:06:27.219 user 0m2.006s 00:06:27.219 sys 0m0.469s 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.219 ************************************ 00:06:27.219 END TEST default_locks 00:06:27.219 ************************************ 00:06:27.219 15:29:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.219 15:29:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:27.219 15:29:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:27.219 15:29:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.219 15:29:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.219 15:29:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.219 ************************************ 00:06:27.219 START TEST default_locks_via_rpc 00:06:27.219 ************************************ 00:06:27.219 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:27.219 15:29:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62789 00:06:27.219 15:29:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62789 00:06:27.219 15:29:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.219 15:29:22 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 62789 ']' 00:06:27.219 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.219 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.219 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.220 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.220 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.220 [2024-07-15 15:29:22.206188] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:27.220 [2024-07-15 15:29:22.206272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62789 ] 00:06:27.220 [2024-07-15 15:29:22.338922] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.477 [2024-07-15 15:29:22.398910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62789 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62789 00:06:27.477 15:29:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.044 15:29:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62789 00:06:28.044 15:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62789 ']' 
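Editor's note: the locks_exist check exercised just above reduces to asking lslocks whether the target process still holds an advisory lock on one of the /var/tmp/spdk_cpu_lock_* files. A minimal sketch using the same helper name and commands the trace shows (the exact helper body is an assumption):

    locks_exist() {
        local pid=$1
        # any spdk_cpu_lock_* file still locked by this PID?
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # usage: true only while the target holds its core lock,
    # e.g. after rpc_cmd framework_enable_cpumask_locks above
    locks_exist 62789 && echo "core lock held by 62789"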
00:06:28.044 15:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62789 00:06:28.044 15:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:28.044 15:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.044 15:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62789 00:06:28.044 15:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.044 15:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.044 killing process with pid 62789 00:06:28.044 15:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62789' 00:06:28.044 15:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62789 00:06:28.044 15:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62789 00:06:28.301 00:06:28.301 real 0m1.130s 00:06:28.301 user 0m1.204s 00:06:28.301 sys 0m0.429s 00:06:28.301 15:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.301 15:29:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.301 ************************************ 00:06:28.301 END TEST default_locks_via_rpc 00:06:28.301 ************************************ 00:06:28.301 15:29:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:28.301 15:29:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:28.301 15:29:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.301 15:29:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.301 15:29:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.301 ************************************ 00:06:28.301 START TEST non_locking_app_on_locked_coremask 00:06:28.301 ************************************ 00:06:28.301 15:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:28.301 15:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62839 00:06:28.301 15:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.301 15:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62839 /var/tmp/spdk.sock 00:06:28.301 15:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62839 ']' 00:06:28.301 15:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.301 15:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.301 15:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:28.301 15:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.301 15:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.301 [2024-07-15 15:29:23.417718] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:28.301 [2024-07-15 15:29:23.417814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62839 ] 00:06:28.557 [2024-07-15 15:29:23.555299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.557 [2024-07-15 15:29:23.612940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.491 15:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.491 15:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:29.491 15:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62867 00:06:29.491 15:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62867 /var/tmp/spdk2.sock 00:06:29.491 15:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62867 ']' 00:06:29.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.491 15:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.491 15:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.491 15:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.491 15:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.491 15:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.491 15:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:29.491 [2024-07-15 15:29:24.474993] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:29.491 [2024-07-15 15:29:24.475087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62867 ] 00:06:29.491 [2024-07-15 15:29:24.617128] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
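Editor's note: the second target launched above is the point of the non_locking test. The first instance owns core 0's lock; the second can share the same core only because it is started with --disable-cpumask-locks and its own RPC socket. A condensed sketch of the two launch lines, with the binary path, mask, and flags taken from the trace (backgrounding with & is illustrative):

    # first instance claims core 0 (lock file naming as in the
    # check_remaining_locks step later in the log)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &

    # second instance reuses core 0 but skips lock acquisition,
    # listening on its own socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &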
00:06:29.491 [2024-07-15 15:29:24.617191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.755 [2024-07-15 15:29:24.732623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.702 15:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.702 15:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:30.702 15:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62839 00:06:30.702 15:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62839 00:06:30.702 15:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.269 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62839 00:06:31.269 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62839 ']' 00:06:31.269 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62839 00:06:31.269 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:31.269 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.269 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62839 00:06:31.269 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.269 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.269 killing process with pid 62839 00:06:31.269 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62839' 00:06:31.269 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62839 00:06:31.269 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62839 00:06:31.836 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62867 00:06:31.836 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62867 ']' 00:06:31.836 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62867 00:06:31.836 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:31.836 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.836 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62867 00:06:31.836 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.836 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.836 killing process with pid 62867 00:06:31.836 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62867' 00:06:31.836 15:29:26 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62867 00:06:31.836 15:29:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62867 00:06:32.094 00:06:32.094 real 0m3.745s 00:06:32.094 user 0m4.473s 00:06:32.094 sys 0m0.921s 00:06:32.094 15:29:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.094 15:29:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.094 ************************************ 00:06:32.094 END TEST non_locking_app_on_locked_coremask 00:06:32.094 ************************************ 00:06:32.094 15:29:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:32.094 15:29:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:32.094 15:29:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.094 15:29:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.094 15:29:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.094 ************************************ 00:06:32.094 START TEST locking_app_on_unlocked_coremask 00:06:32.094 ************************************ 00:06:32.094 15:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:32.094 15:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62946 00:06:32.094 15:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:32.094 15:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 62946 /var/tmp/spdk.sock 00:06:32.094 15:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62946 ']' 00:06:32.094 15:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.094 15:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.094 15:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.094 15:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.094 15:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.094 [2024-07-15 15:29:27.214284] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:32.094 [2024-07-15 15:29:27.214404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62946 ] 00:06:32.352 [2024-07-15 15:29:27.346019] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.352 [2024-07-15 15:29:27.346074] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.352 [2024-07-15 15:29:27.404131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.286 15:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.286 15:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:33.286 15:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.286 15:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62974 00:06:33.286 15:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 62974 /var/tmp/spdk2.sock 00:06:33.286 15:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62974 ']' 00:06:33.286 15:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.286 15:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.286 15:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.286 15:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.286 15:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.286 [2024-07-15 15:29:28.186733] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
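Editor's note: this is the mirror case of the previous test. Here the first target was started with --disable-cpumask-locks, so the second target launched just above can claim core 0's lock itself even though both reactors share the core. In shorthand, with the command lines as they appear in the trace (backgrounding is illustrative):

    # primary target: runs on core 0 without holding its lock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &

    # secondary target: same core, but this one does claim the core lock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &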
00:06:33.286 [2024-07-15 15:29:28.186822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62974 ] 00:06:33.286 [2024-07-15 15:29:28.329984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.544 [2024-07-15 15:29:28.444865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.110 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.110 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:34.110 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 62974 00:06:34.110 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.110 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62974 00:06:35.046 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 62946 00:06:35.046 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62946 ']' 00:06:35.046 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 62946 00:06:35.046 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:35.046 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.046 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62946 00:06:35.046 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.046 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.046 killing process with pid 62946 00:06:35.046 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62946' 00:06:35.046 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 62946 00:06:35.046 15:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 62946 00:06:35.621 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 62974 00:06:35.621 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62974 ']' 00:06:35.621 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 62974 00:06:35.621 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:35.621 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.621 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62974 00:06:35.621 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.621 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.621 killing process with pid 62974 00:06:35.621 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62974' 00:06:35.621 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 62974 00:06:35.621 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 62974 00:06:35.878 00:06:35.878 real 0m3.641s 00:06:35.878 user 0m4.309s 00:06:35.878 sys 0m0.883s 00:06:35.878 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.878 15:29:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.878 ************************************ 00:06:35.878 END TEST locking_app_on_unlocked_coremask 00:06:35.878 ************************************ 00:06:35.878 15:29:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:35.878 15:29:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.878 15:29:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.878 15:29:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.878 15:29:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.878 ************************************ 00:06:35.878 START TEST locking_app_on_locked_coremask 00:06:35.878 ************************************ 00:06:35.879 15:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:35.879 15:29:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63053 00:06:35.879 15:29:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63053 /var/tmp/spdk.sock 00:06:35.879 15:29:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.879 15:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63053 ']' 00:06:35.879 15:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.879 15:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.879 15:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.879 15:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.879 15:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.879 [2024-07-15 15:29:30.890335] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:06:35.879 [2024-07-15 15:29:30.890423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63053 ] 00:06:36.137 [2024-07-15 15:29:31.028085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.137 [2024-07-15 15:29:31.104906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63081 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63081 /var/tmp/spdk2.sock 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63081 /var/tmp/spdk2.sock 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63081 /var/tmp/spdk2.sock 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63081 ']' 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.071 15:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.071 [2024-07-15 15:29:31.961498] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
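Editor's note: the NOT waitforlisten wrapper set up above is a negative assertion -- the test passes only if the second target fails to come up. A minimal sketch of what such a wrapper does; the real autotest_common.sh helper visible in the trace also validates its argument (valid_exec_arg) and distinguishes signal exits via the es > 128 branch, which this sketch omits:

    NOT() {
        local es=0
        "$@" || es=$?
        # invert the result: succeed only when the wrapped command failed
        (( es != 0 ))
    }

    # usage: passes, because pid 63081 never starts listening on its socket
    NOT waitforlisten 63081 /var/tmp/spdk2.sock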
00:06:37.071 [2024-07-15 15:29:31.961617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63081 ] 00:06:37.071 [2024-07-15 15:29:32.107939] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63053 has claimed it. 00:06:37.071 [2024-07-15 15:29:32.108009] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.637 ERROR: process (pid: 63081) is no longer running 00:06:37.637 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63081) - No such process 00:06:37.637 15:29:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.637 15:29:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:37.637 15:29:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:37.637 15:29:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.637 15:29:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.637 15:29:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.637 15:29:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63053 00:06:37.638 15:29:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63053 00:06:37.638 15:29:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.205 15:29:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63053 00:06:38.205 15:29:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63053 ']' 00:06:38.205 15:29:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63053 00:06:38.205 15:29:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:38.205 15:29:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.205 15:29:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63053 00:06:38.205 killing process with pid 63053 00:06:38.205 15:29:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.205 15:29:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.205 15:29:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63053' 00:06:38.205 15:29:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63053 00:06:38.205 15:29:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63053 00:06:38.463 ************************************ 00:06:38.463 END TEST locking_app_on_locked_coremask 00:06:38.463 ************************************ 00:06:38.463 00:06:38.463 real 0m2.620s 00:06:38.463 user 0m3.209s 00:06:38.463 sys 0m0.574s 00:06:38.463 15:29:33 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.463 15:29:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.463 15:29:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:38.463 15:29:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:38.463 15:29:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.463 15:29:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.463 15:29:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.463 ************************************ 00:06:38.463 START TEST locking_overlapped_coremask 00:06:38.463 ************************************ 00:06:38.463 15:29:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:38.463 15:29:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63127 00:06:38.463 15:29:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63127 /var/tmp/spdk.sock 00:06:38.463 15:29:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63127 ']' 00:06:38.463 15:29:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:38.463 15:29:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.463 15:29:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.463 15:29:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.463 15:29:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.463 15:29:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.463 [2024-07-15 15:29:33.559037] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:06:38.463 [2024-07-15 15:29:33.559133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63127 ] 00:06:38.722 [2024-07-15 15:29:33.697119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.722 [2024-07-15 15:29:33.767607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.722 [2024-07-15 15:29:33.767745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.722 [2024-07-15 15:29:33.767753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63157 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63157 /var/tmp/spdk2.sock 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63157 /var/tmp/spdk2.sock 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:39.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63157 /var/tmp/spdk2.sock 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63157 ']' 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.656 15:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.656 [2024-07-15 15:29:34.643637] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
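Editor's note: the conflict this test relies on is visible in the two core masks. The first target holds -m 0x7 (cores 0-2) and the second asks for -m 0x1c (cores 2-4), so they collide exactly on core 2, the core named in the claim failure that follows. A quick way to see the shared bit:

    # bitwise AND of the two masks; 0x4 is bit 2, i.e. core 2 is contested
    printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))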
00:06:39.656 [2024-07-15 15:29:34.644460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63157 ] 00:06:39.915 [2024-07-15 15:29:34.792442] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63127 has claimed it. 00:06:39.915 [2024-07-15 15:29:34.792508] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:40.482 ERROR: process (pid: 63157) is no longer running 00:06:40.482 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63157) - No such process 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63127 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63127 ']' 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63127 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63127 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63127' 00:06:40.482 killing process with pid 63127 00:06:40.482 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63127 00:06:40.482 15:29:35 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63127 00:06:40.741 00:06:40.741 real 0m2.130s 00:06:40.741 user 0m6.154s 00:06:40.741 sys 0m0.340s 00:06:40.741 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.741 15:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.741 ************************************ 00:06:40.741 END TEST locking_overlapped_coremask 00:06:40.741 ************************************ 00:06:40.741 15:29:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:40.741 15:29:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:40.741 15:29:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.741 15:29:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.741 15:29:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.741 ************************************ 00:06:40.741 START TEST locking_overlapped_coremask_via_rpc 00:06:40.741 ************************************ 00:06:40.741 15:29:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:40.741 15:29:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63209 00:06:40.742 15:29:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63209 /var/tmp/spdk.sock 00:06:40.742 15:29:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:40.742 15:29:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63209 ']' 00:06:40.742 15:29:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.742 15:29:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.742 15:29:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.742 15:29:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.742 15:29:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.742 [2024-07-15 15:29:35.748612] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:40.742 [2024-07-15 15:29:35.748890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63209 ] 00:06:41.000 [2024-07-15 15:29:35.887575] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
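Editor's note: before the next test starts above, the previous one verified with check_remaining_locks that exactly the expected per-core lock files were left behind for mask 0x7. A sketch of that comparison, matching the globs shown in the trace (the function wrapper itself is an assumption):

    check_remaining_locks() {
        # lock files actually present vs. the set expected for cores 0-2
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }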
00:06:41.000 [2024-07-15 15:29:35.887622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.000 [2024-07-15 15:29:35.948878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.000 [2024-07-15 15:29:35.949004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.000 [2024-07-15 15:29:35.949010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.936 15:29:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.936 15:29:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:41.936 15:29:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63239 00:06:41.936 15:29:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:41.936 15:29:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63239 /var/tmp/spdk2.sock 00:06:41.936 15:29:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63239 ']' 00:06:41.936 15:29:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.936 15:29:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.936 15:29:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.936 15:29:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.936 15:29:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.936 [2024-07-15 15:29:36.785195] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:41.936 [2024-07-15 15:29:36.785491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63239 ] 00:06:41.936 [2024-07-15 15:29:36.931878] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.936 [2024-07-15 15:29:36.931931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.936 [2024-07-15 15:29:37.047277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.936 [2024-07-15 15:29:37.050616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.936 [2024-07-15 15:29:37.050616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.945 [2024-07-15 15:29:37.820649] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63209 has claimed it. 00:06:42.945 2024/07/15 15:29:37 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:42.945 request: 00:06:42.945 { 00:06:42.945 "method": "framework_enable_cpumask_locks", 00:06:42.945 "params": {} 00:06:42.945 } 00:06:42.945 Got JSON-RPC error response 00:06:42.945 GoRPCClient: error on JSON-RPC call 00:06:42.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
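Editor's note: the failed call above is the crux of the via_rpc test. The first target (pid 63209, mask 0x7) has already claimed cores 0-2 through framework_enable_cpumask_locks, so the same RPC on the second target's socket is rejected with -32603 "Failed to claim CPU core: 2". The rpc_cmd helper in the trace presumably wraps SPDK's scripts/rpc.py; assuming that and the checkout path shown earlier in the log, the same pair of calls could be issued by hand:

    # enabling locks on the first target succeeds and claims cores 0-2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks

    # the same call against the second target fails: core 2 is already locked by pid 63209
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks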
00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63209 /var/tmp/spdk.sock 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63209 ']' 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.945 15:29:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.203 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.203 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:43.203 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63239 /var/tmp/spdk2.sock 00:06:43.203 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63239 ']' 00:06:43.203 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.203 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.203 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:43.203 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.203 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.461 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.461 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:43.461 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:43.461 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.461 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.461 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.461 ************************************ 00:06:43.461 END TEST locking_overlapped_coremask_via_rpc 00:06:43.461 ************************************ 00:06:43.461 00:06:43.461 real 0m2.754s 00:06:43.461 user 0m1.435s 00:06:43.461 sys 0m0.247s 00:06:43.461 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.461 15:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.461 15:29:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:43.461 15:29:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:43.461 15:29:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63209 ]] 00:06:43.461 15:29:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63209 00:06:43.461 15:29:38 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63209 ']' 00:06:43.461 15:29:38 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63209 00:06:43.461 15:29:38 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:43.461 15:29:38 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.461 15:29:38 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63209 00:06:43.461 killing process with pid 63209 00:06:43.461 15:29:38 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.461 15:29:38 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.461 15:29:38 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63209' 00:06:43.461 15:29:38 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63209 00:06:43.461 15:29:38 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63209 00:06:43.719 15:29:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63239 ]] 00:06:43.719 15:29:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63239 00:06:43.719 15:29:38 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63239 ']' 00:06:43.719 15:29:38 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63239 00:06:43.719 15:29:38 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:43.719 15:29:38 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.719 15:29:38 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63239 00:06:43.719 killing process with pid 63239 00:06:43.719 15:29:38 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:43.719 15:29:38 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:43.719 15:29:38 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63239' 00:06:43.719 15:29:38 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63239 00:06:43.719 15:29:38 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63239 00:06:43.977 15:29:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.977 15:29:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:43.977 15:29:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63209 ]] 00:06:43.977 15:29:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63209 00:06:43.977 15:29:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63209 ']' 00:06:43.977 15:29:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63209 00:06:43.977 Process with pid 63209 is not found 00:06:43.977 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63209) - No such process 00:06:43.977 15:29:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63209 is not found' 00:06:43.977 Process with pid 63239 is not found 00:06:43.977 15:29:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63239 ]] 00:06:43.977 15:29:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63239 00:06:43.977 15:29:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63239 ']' 00:06:43.977 15:29:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63239 00:06:43.977 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63239) - No such process 00:06:43.977 15:29:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63239 is not found' 00:06:43.977 15:29:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.977 ************************************ 00:06:43.977 END TEST cpu_locks 00:06:43.977 ************************************ 00:06:43.977 00:06:43.977 real 0m18.783s 00:06:43.977 user 0m35.178s 00:06:43.977 sys 0m4.461s 00:06:43.977 15:29:39 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.977 15:29:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.977 15:29:39 event -- common/autotest_common.sh@1142 -- # return 0 00:06:43.977 00:06:43.977 real 0m44.158s 00:06:43.977 user 1m29.260s 00:06:43.977 sys 0m7.973s 00:06:43.977 ************************************ 00:06:43.977 END TEST event 00:06:43.977 ************************************ 00:06:43.977 15:29:39 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.977 15:29:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.234 15:29:39 -- common/autotest_common.sh@1142 -- # return 0 00:06:44.234 15:29:39 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:44.234 15:29:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.234 15:29:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.234 15:29:39 -- common/autotest_common.sh@10 -- # set +x 00:06:44.234 ************************************ 00:06:44.234 START TEST thread 
00:06:44.234 ************************************ 00:06:44.234 15:29:39 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:44.234 * Looking for test storage... 00:06:44.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:44.234 15:29:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.234 15:29:39 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:44.234 15:29:39 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.234 15:29:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.234 ************************************ 00:06:44.234 START TEST thread_poller_perf 00:06:44.234 ************************************ 00:06:44.234 15:29:39 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.234 [2024-07-15 15:29:39.225133] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:44.234 [2024-07-15 15:29:39.225227] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63385 ] 00:06:44.491 [2024-07-15 15:29:39.359756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.491 [2024-07-15 15:29:39.431915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.491 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:45.425 ====================================== 00:06:45.425 busy:2210928098 (cyc) 00:06:45.425 total_run_count: 282000 00:06:45.425 tsc_hz: 2200000000 (cyc) 00:06:45.425 ====================================== 00:06:45.425 poller_cost: 7840 (cyc), 3563 (nsec) 00:06:45.425 00:06:45.425 real 0m1.303s 00:06:45.425 user 0m1.165s 00:06:45.425 sys 0m0.032s 00:06:45.425 15:29:40 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.425 15:29:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.425 ************************************ 00:06:45.425 END TEST thread_poller_perf 00:06:45.425 ************************************ 00:06:45.425 15:29:40 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:45.425 15:29:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.425 15:29:40 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:45.425 15:29:40 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.425 15:29:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.684 ************************************ 00:06:45.684 START TEST thread_poller_perf 00:06:45.684 ************************************ 00:06:45.684 15:29:40 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.684 [2024-07-15 15:29:40.583132] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:06:45.684 [2024-07-15 15:29:40.583421] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63415 ] 00:06:45.684 [2024-07-15 15:29:40.723329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.684 [2024-07-15 15:29:40.783435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.684 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:47.060 ====================================== 00:06:47.060 busy:2202233094 (cyc) 00:06:47.060 total_run_count: 3959000 00:06:47.060 tsc_hz: 2200000000 (cyc) 00:06:47.060 ====================================== 00:06:47.060 poller_cost: 556 (cyc), 252 (nsec) 00:06:47.060 ************************************ 00:06:47.060 END TEST thread_poller_perf 00:06:47.060 ************************************ 00:06:47.060 00:06:47.061 real 0m1.296s 00:06:47.061 user 0m1.148s 00:06:47.061 sys 0m0.040s 00:06:47.061 15:29:41 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.061 15:29:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.061 15:29:41 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:47.061 15:29:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:47.061 ************************************ 00:06:47.061 END TEST thread 00:06:47.061 ************************************ 00:06:47.061 00:06:47.061 real 0m2.779s 00:06:47.061 user 0m2.387s 00:06:47.061 sys 0m0.172s 00:06:47.061 15:29:41 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.061 15:29:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.061 15:29:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:47.061 15:29:41 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:47.061 15:29:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.061 15:29:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.061 15:29:41 -- common/autotest_common.sh@10 -- # set +x 00:06:47.061 ************************************ 00:06:47.061 START TEST accel 00:06:47.061 ************************************ 00:06:47.061 15:29:41 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:47.061 * Looking for test storage... 00:06:47.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:47.061 15:29:42 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:47.061 15:29:42 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:47.061 15:29:42 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:47.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.061 15:29:42 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63490 00:06:47.061 15:29:42 accel -- accel/accel.sh@63 -- # waitforlisten 63490 00:06:47.061 15:29:42 accel -- common/autotest_common.sh@829 -- # '[' -z 63490 ']' 00:06:47.061 15:29:42 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.061 15:29:42 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.061 15:29:42 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:47.061 15:29:42 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.061 15:29:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.061 15:29:42 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:47.061 15:29:42 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:47.061 15:29:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.061 15:29:42 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.061 15:29:42 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.061 15:29:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.061 15:29:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.061 15:29:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:47.061 15:29:42 accel -- accel/accel.sh@41 -- # jq -r . 00:06:47.061 [2024-07-15 15:29:42.098960] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:47.061 [2024-07-15 15:29:42.099248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63490 ] 00:06:47.319 [2024-07-15 15:29:42.236809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.319 [2024-07-15 15:29:42.305883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.578 15:29:42 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.578 15:29:42 accel -- common/autotest_common.sh@862 -- # return 0 00:06:47.578 15:29:42 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:47.578 15:29:42 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:47.578 15:29:42 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:47.578 15:29:42 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:47.578 15:29:42 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:47.578 15:29:42 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:47.578 15:29:42 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.578 15:29:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.579 15:29:42 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:47.579 15:29:42 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.579 15:29:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.579 15:29:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.579 15:29:42 accel -- accel/accel.sh@75 -- # killprocess 63490 00:06:47.579 15:29:42 accel -- common/autotest_common.sh@948 -- # '[' -z 63490 ']' 00:06:47.579 15:29:42 accel -- common/autotest_common.sh@952 -- # kill -0 63490 00:06:47.579 15:29:42 accel -- common/autotest_common.sh@953 -- # uname 00:06:47.579 15:29:42 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.579 15:29:42 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63490 00:06:47.579 killing process with pid 63490 00:06:47.579 15:29:42 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.579 15:29:42 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.579 15:29:42 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63490' 00:06:47.579 15:29:42 accel -- common/autotest_common.sh@967 -- # kill 63490 00:06:47.579 15:29:42 accel -- common/autotest_common.sh@972 -- # wait 63490 00:06:47.838 15:29:42 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:47.838 15:29:42 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:47.838 15:29:42 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.838 15:29:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.838 15:29:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.838 15:29:42 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:47.838 15:29:42 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:47.838 15:29:42 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:47.838 15:29:42 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.838 15:29:42 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.838 15:29:42 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.838 15:29:42 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.838 15:29:42 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.838 15:29:42 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:47.838 15:29:42 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:47.838 15:29:42 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.838 15:29:42 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:47.838 15:29:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.838 15:29:42 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:47.838 15:29:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:47.838 15:29:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.838 15:29:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.838 ************************************ 00:06:47.838 START TEST accel_missing_filename 00:06:47.838 ************************************ 00:06:47.838 15:29:42 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:47.838 15:29:42 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:47.838 15:29:42 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:47.838 15:29:42 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:47.838 15:29:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.838 15:29:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:47.838 15:29:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.838 15:29:42 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:47.838 15:29:42 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:47.838 15:29:42 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:47.838 15:29:42 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.838 15:29:42 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.838 15:29:42 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.838 15:29:42 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.838 15:29:42 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.838 15:29:42 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:47.838 15:29:42 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:47.838 [2024-07-15 15:29:42.920216] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:47.838 [2024-07-15 15:29:42.920304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63546 ] 00:06:48.097 [2024-07-15 15:29:43.061207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.097 [2024-07-15 15:29:43.131505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.097 [2024-07-15 15:29:43.165131] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.097 [2024-07-15 15:29:43.208798] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:48.356 A filename is required. 
00:06:48.356 15:29:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:48.356 15:29:43 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.356 15:29:43 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:48.356 15:29:43 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.356 15:29:43 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:48.356 15:29:43 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.356 00:06:48.356 real 0m0.392s 00:06:48.356 user 0m0.263s 00:06:48.356 sys 0m0.071s 00:06:48.356 15:29:43 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.356 15:29:43 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:48.356 ************************************ 00:06:48.356 END TEST accel_missing_filename 00:06:48.356 ************************************ 00:06:48.356 15:29:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.356 15:29:43 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:48.356 15:29:43 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:48.356 15:29:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.356 15:29:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.356 ************************************ 00:06:48.356 START TEST accel_compress_verify 00:06:48.356 ************************************ 00:06:48.356 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:48.356 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:48.356 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:48.356 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.356 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.356 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.356 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.356 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:48.356 15:29:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:48.356 15:29:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:48.356 15:29:43 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.356 15:29:43 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.356 15:29:43 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.356 15:29:43 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.356 15:29:43 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.356 15:29:43 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:48.356 15:29:43 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:48.356 [2024-07-15 15:29:43.354236] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:48.356 [2024-07-15 15:29:43.354340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63570 ] 00:06:48.614 [2024-07-15 15:29:43.486084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.614 [2024-07-15 15:29:43.546612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.614 [2024-07-15 15:29:43.581017] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.614 [2024-07-15 15:29:43.622188] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:48.614 00:06:48.614 Compression does not support the verify option, aborting. 00:06:48.614 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:48.614 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.614 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:48.614 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.614 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:48.614 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.614 00:06:48.614 real 0m0.368s 00:06:48.614 user 0m0.234s 00:06:48.614 sys 0m0.073s 00:06:48.614 ************************************ 00:06:48.614 END TEST accel_compress_verify 00:06:48.614 ************************************ 00:06:48.614 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.614 15:29:43 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:48.614 15:29:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.614 15:29:43 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:48.614 15:29:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:48.614 15:29:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.614 15:29:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.872 ************************************ 00:06:48.872 START TEST accel_wrong_workload 00:06:48.872 ************************************ 00:06:48.872 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:48.872 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:48.872 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:48.872 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.872 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.872 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.872 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.872 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:48.872 15:29:43 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:48.872 15:29:43 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:48.872 15:29:43 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.872 15:29:43 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.872 15:29:43 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.872 15:29:43 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.872 15:29:43 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.872 15:29:43 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:48.872 15:29:43 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:48.872 Unsupported workload type: foobar 00:06:48.872 [2024-07-15 15:29:43.769590] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:48.872 accel_perf options: 00:06:48.872 [-h help message] 00:06:48.872 [-q queue depth per core] 00:06:48.872 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:48.872 [-T number of threads per core 00:06:48.872 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:48.872 [-t time in seconds] 00:06:48.872 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:48.872 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:48.872 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:48.872 [-l for compress/decompress workloads, name of uncompressed input file 00:06:48.873 [-S for crc32c workload, use this seed value (default 0) 00:06:48.873 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:48.873 [-f for fill workload, use this BYTE value (default 255) 00:06:48.873 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:48.873 [-y verify result if this switch is on] 00:06:48.873 [-a tasks to allocate per core (default: same value as -q)] 00:06:48.873 Can be used to spread operations across a wider range of memory. 
00:06:48.873 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:48.873 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.873 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.873 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.873 00:06:48.873 real 0m0.030s 00:06:48.873 user 0m0.015s 00:06:48.873 sys 0m0.014s 00:06:48.873 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.873 ************************************ 00:06:48.873 END TEST accel_wrong_workload 00:06:48.873 ************************************ 00:06:48.873 15:29:43 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:48.873 15:29:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.873 15:29:43 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:48.873 15:29:43 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:48.873 15:29:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.873 15:29:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.873 ************************************ 00:06:48.873 START TEST accel_negative_buffers 00:06:48.873 ************************************ 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:48.873 15:29:43 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:48.873 15:29:43 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:48.873 15:29:43 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.873 15:29:43 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.873 15:29:43 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.873 15:29:43 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.873 15:29:43 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.873 15:29:43 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:48.873 15:29:43 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:48.873 -x option must be non-negative. 
00:06:48.873 [2024-07-15 15:29:43.842741] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:48.873 accel_perf options: 00:06:48.873 [-h help message] 00:06:48.873 [-q queue depth per core] 00:06:48.873 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:48.873 [-T number of threads per core 00:06:48.873 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:48.873 [-t time in seconds] 00:06:48.873 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:48.873 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:48.873 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:48.873 [-l for compress/decompress workloads, name of uncompressed input file 00:06:48.873 [-S for crc32c workload, use this seed value (default 0) 00:06:48.873 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:48.873 [-f for fill workload, use this BYTE value (default 255) 00:06:48.873 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:48.873 [-y verify result if this switch is on] 00:06:48.873 [-a tasks to allocate per core (default: same value as -q)] 00:06:48.873 Can be used to spread operations across a wider range of memory. 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.873 00:06:48.873 real 0m0.030s 00:06:48.873 user 0m0.019s 00:06:48.873 sys 0m0.011s 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.873 ************************************ 00:06:48.873 END TEST accel_negative_buffers 00:06:48.873 ************************************ 00:06:48.873 15:29:43 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:48.873 15:29:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.873 15:29:43 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:48.873 15:29:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:48.873 15:29:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.873 15:29:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.873 ************************************ 00:06:48.873 START TEST accel_crc32c 00:06:48.873 ************************************ 00:06:48.873 15:29:43 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:48.873 15:29:43 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:48.873 [2024-07-15 15:29:43.919581] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:48.873 [2024-07-15 15:29:43.919723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63629 ] 00:06:49.131 [2024-07-15 15:29:44.060643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.131 [2024-07-15 15:29:44.130598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.131 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.132 15:29:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:50.507 15:29:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.507 00:06:50.507 real 0m1.384s 00:06:50.507 user 0m0.015s 00:06:50.507 sys 0m0.000s 00:06:50.507 15:29:45 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.507 15:29:45 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:50.507 ************************************ 00:06:50.507 END TEST accel_crc32c 00:06:50.507 ************************************ 00:06:50.507 15:29:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.507 15:29:45 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:50.507 15:29:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:50.507 15:29:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.507 15:29:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.507 ************************************ 00:06:50.507 START TEST accel_crc32c_C2 00:06:50.507 ************************************ 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:50.507 15:29:45 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:50.507 [2024-07-15 15:29:45.349424] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:50.507 [2024-07-15 15:29:45.349534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63658 ] 00:06:50.507 [2024-07-15 15:29:45.485845] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.507 [2024-07-15 15:29:45.544882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.507 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.508 15:29:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.892 15:29:46 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.892 00:06:51.892 real 0m1.369s 00:06:51.892 user 0m0.013s 00:06:51.892 sys 0m0.003s 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.892 15:29:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:51.892 ************************************ 00:06:51.892 END TEST accel_crc32c_C2 00:06:51.892 ************************************ 00:06:51.892 15:29:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.892 15:29:46 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:51.892 15:29:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:51.892 15:29:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.892 15:29:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.892 ************************************ 00:06:51.892 START TEST accel_copy 00:06:51.892 ************************************ 00:06:51.892 15:29:46 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:51.892 [2024-07-15 15:29:46.768007] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:51.892 [2024-07-15 15:29:46.768088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63692 ] 00:06:51.892 [2024-07-15 15:29:46.903502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.892 [2024-07-15 15:29:46.963191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 
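For orientation, the copy case being set up here boils down to the invocation below (command copied from the trace); the [[ -n software ]] checks printed after each run indicate the harness expects the operation to be served by the software accel module rather than a hardware offload:

# 1-second memory-copy workload with verification, software accel module assumed
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y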
15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.892 15:29:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.893 15:29:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:53.267 15:29:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.267 00:06:53.267 real 0m1.367s 00:06:53.267 user 0m1.203s 00:06:53.267 sys 0m0.071s 00:06:53.267 15:29:48 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.267 15:29:48 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.267 ************************************ 00:06:53.267 END TEST accel_copy 00:06:53.267 ************************************ 00:06:53.267 15:29:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.267 15:29:48 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:53.268 15:29:48 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:53.268 15:29:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.268 15:29:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.268 ************************************ 00:06:53.268 START TEST accel_fill 00:06:53.268 ************************************ 00:06:53.268 15:29:48 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.268 15:29:48 accel.accel_fill -- 
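A sketch of the fill case that starts here, with flags taken from the trace; interpreting -f 128 as the fill pattern byte, -q 64 as the queue depth and -a 64 as the buffer alignment is an assumption based on the flag names, not something the log itself states:

# 1-second fill workload: pattern byte 128 (assumed), queue depth 64 (assumed),
# 64-byte alignment (assumed), with verification
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y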
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:53.268 15:29:48 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:53.268 [2024-07-15 15:29:48.185717] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:53.268 [2024-07-15 15:29:48.185805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63727 ] 00:06:53.268 [2024-07-15 15:29:48.323436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.268 [2024-07-15 15:29:48.383484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.526 15:29:48 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.526 15:29:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:54.462 15:29:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.462 00:06:54.462 real 0m1.373s 00:06:54.462 user 0m1.206s 00:06:54.462 sys 0m0.077s 00:06:54.462 15:29:49 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.462 15:29:49 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:54.462 ************************************ 00:06:54.462 END TEST accel_fill 00:06:54.462 ************************************ 00:06:54.462 15:29:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.462 15:29:49 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:54.462 15:29:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:54.462 15:29:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.462 15:29:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.462 ************************************ 00:06:54.462 START TEST accel_copy_crc32c 00:06:54.462 ************************************ 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
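The copy_crc32c case driven below combines a copy and a CRC-32C in a single operation; judging from the two '4096 bytes' values set in the trace, both the copy and the checksum appear to work on 4 KiB buffers, though that pairing is an inference from the trace rather than an explicit statement. A hand-run sketch:

# 1-second combined copy + CRC-32C workload with verification
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y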
local IFS=, 00:06:54.462 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:54.721 [2024-07-15 15:29:49.604246] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:54.721 [2024-07-15 15:29:49.604335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63760 ] 00:06:54.721 [2024-07-15 15:29:49.743056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.721 [2024-07-15 15:29:49.802566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.721 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.722 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.980 15:29:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.916 00:06:55.916 real 0m1.371s 00:06:55.916 user 0m1.201s 00:06:55.916 sys 0m0.078s 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.916 ************************************ 00:06:55.916 END TEST accel_copy_crc32c 00:06:55.916 ************************************ 00:06:55.916 15:29:50 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:55.916 15:29:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.916 15:29:50 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:55.916 15:29:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:55.916 15:29:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.916 15:29:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.916 ************************************ 00:06:55.916 START TEST accel_copy_crc32c_C2 00:06:55.916 ************************************ 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:55.916 15:29:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:55.916 [2024-07-15 15:29:51.021657] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:55.916 [2024-07-15 15:29:51.021743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63796 ] 00:06:56.175 [2024-07-15 15:29:51.157610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.175 [2024-07-15 15:29:51.217016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- 
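The chained variant is launched through the harness wrapper visible in the trace (run_test and accel_test are functions from autotest_common.sh and accel.sh, so they exist only inside this test environment); the '8192 bytes' value set alongside '4096 bytes' suggests the two chained 4 KiB sources feed an 8 KiB destination, which is again an inference. Outside the harness, the equivalent direct call would be roughly:

# inside the harness: run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2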
accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.175 15:29:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.551 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.551 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.551 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.551 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.551 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.551 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.551 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.551 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.552 00:06:57.552 real 0m1.375s 00:06:57.552 user 0m1.207s 00:06:57.552 sys 0m0.073s 00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:57.552 15:29:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:57.552 ************************************ 00:06:57.552 END TEST accel_copy_crc32c_C2 00:06:57.552 ************************************ 00:06:57.552 15:29:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.552 15:29:52 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:57.552 15:29:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:57.552 15:29:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.552 15:29:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.552 ************************************ 00:06:57.552 START TEST accel_dualcast 00:06:57.552 ************************************ 00:06:57.552 15:29:52 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:57.552 [2024-07-15 15:29:52.440218] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
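The dualcast case set up here presumably writes a single source buffer to two destinations and verifies both; that description is an assumption drawn from the workload name. A minimal sketch of the underlying call:

# 1-second dualcast workload (one source, two destinations assumed) with verification
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y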
00:06:57.552 [2024-07-15 15:29:52.440836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63825 ] 00:06:57.552 [2024-07-15 15:29:52.583909] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.552 [2024-07-15 15:29:52.644801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.552 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.811 15:29:52 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.811 15:29:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:58.745 15:29:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.745 00:06:58.745 real 0m1.373s 00:06:58.745 user 0m0.016s 00:06:58.745 sys 0m0.001s 00:06:58.745 ************************************ 00:06:58.745 END TEST accel_dualcast 00:06:58.745 ************************************ 00:06:58.745 15:29:53 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.745 15:29:53 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:58.745 15:29:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.745 15:29:53 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:58.745 15:29:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.745 15:29:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.745 15:29:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.745 ************************************ 00:06:58.745 START TEST accel_compare 00:06:58.745 ************************************ 00:06:58.745 15:29:53 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:58.745 15:29:53 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:58.745 [2024-07-15 15:29:53.857753] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
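The long runs of "-- # val=..." / "case \"$var\" in" / "IFS=:" entries above are bash xtrace of the loop in accel/accel.sh that reads back the settings accel_perf reports (workload, buffer size, queue depth, run time, verify flag, module) and keeps the opcode and module for the end-of-test checks. A minimal sketch of that pattern follows; it assumes colon-separated "name: value" output and uses illustrative case patterns, with only the variable names accel_opc and accel_module taken from the trace itself:

  # Hedged sketch, not the literal accel.sh source.
  accel_perf=./build/examples/accel_perf            # path as used in the trace
  while IFS=: read -r var val; do
    case "$var" in                                  # patterns below are assumptions
      *opcode*) accel_opc=${val//[[:space:]]/} ;;   # e.g. "compare", "xor"
      *module*) accel_module=${val//[[:space:]]/} ;; # e.g. "software"
    esac
  done < <("$accel_perf" -t 1 -w compare -y)
  echo "ran $accel_opc on the $accel_module module"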
00:06:58.745 [2024-07-15 15:29:53.857840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63859 ] 00:06:59.003 [2024-07-15 15:29:53.994778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.003 [2024-07-15 15:29:54.065001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.003 15:29:54 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.004 15:29:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 
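The accel_compare settings echoed above (software module, 4096-byte buffers, two 32s, a 1-second run, verification on) come from the command shown earlier in the trace, /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y. It can be reproduced by hand on a machine with an SPDK build; the flag readings in the comments are inferred from the echoed settings rather than stated in this log:

  # -t 1: run for 1 second; -w compare: compare workload; -y: verify the results.
  # The 4096-byte buffer size and the two 32s echoed above appear to be defaults,
  # so no extra flags are needed to match this run.
  cd /home/vagrant/spdk_repo/spdk          # path only exists inside the CI VM
  ./build/examples/accel_perf -t 1 -w compare -y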
00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.378 15:29:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:00.379 15:29:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.379 00:07:00.379 real 0m1.385s 00:07:00.379 user 0m1.208s 00:07:00.379 sys 0m0.081s 00:07:00.379 15:29:55 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.379 ************************************ 00:07:00.379 END TEST accel_compare 00:07:00.379 ************************************ 00:07:00.379 15:29:55 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:00.379 15:29:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.379 15:29:55 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:00.379 15:29:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:00.379 15:29:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.379 15:29:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.379 ************************************ 00:07:00.379 START TEST accel_xor 00:07:00.379 ************************************ 00:07:00.379 15:29:55 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:00.379 15:29:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:00.379 [2024-07-15 15:29:55.291212] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
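Each "START TEST ... / END TEST ..." banner pair and the real/user/sys line between them (here 0m1.385s for accel_compare) come from the run_test helper in common/autotest_common.sh, which the trace shows being invoked as run_test accel_compare accel_test -t 1 -w compare -y. A simplified stand-in for the behaviour visible in the log, not the real helper:

  # Hedged sketch: banner, timed run, banner.
  run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                       # produces the real/user/sys lines seen above
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
  }
  run_test demo sleep 1             # stand-in command; the log runs "accel_test -t 1 -w compare -y"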
00:07:00.379 [2024-07-15 15:29:55.291316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63894 ] 00:07:00.379 [2024-07-15 15:29:55.430348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.379 [2024-07-15 15:29:55.490481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.637 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
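Every test above finishes with the same three accel.sh@27 checks, e.g. [[ -n software ]], [[ -n compare ]] and [[ software == \s\o\f\t\w\a\r\e ]]. The backslashes are just bash xtrace quoting each character of the literal pattern "software" on the right-hand side of ==; written out plainly, the postcondition amounts to:

  # End-of-test assertions as they appear (unescaped) in the trace:
  [[ -n $accel_module ]]               # a module name was captured from accel_perf
  [[ -n $accel_opc ]]                  # the opcode (dualcast, compare, xor, ...) was captured
  [[ $accel_module == "software" ]]    # the software (non-offloaded) engine handled the run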
00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.638 15:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.573 15:29:56 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.573 ************************************ 00:07:01.573 END TEST accel_xor 00:07:01.573 ************************************ 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.573 00:07:01.573 real 0m1.374s 00:07:01.573 user 0m1.200s 00:07:01.573 sys 0m0.077s 00:07:01.573 15:29:56 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.573 15:29:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:01.573 15:29:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.573 15:29:56 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:01.573 15:29:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:01.573 15:29:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.573 15:29:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.573 ************************************ 00:07:01.573 START TEST accel_xor 00:07:01.573 ************************************ 00:07:01.573 15:29:56 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:01.573 15:29:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:01.832 [2024-07-15 15:29:56.720228] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
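The first xor run above uses the default number of source buffers (its config echo shows val=2), while the second is launched with an extra -x 3 on the run_test line and its echo shows val=3. Reading -x as the xor source-buffer count is an inference from those two echoes rather than something the log states; the two invocations, reproduced directly:

  # Two xor runs matching the trace: default (two) sources, then three sources via -x 3.
  ./build/examples/accel_perf -t 1 -w xor -y
  ./build/examples/accel_perf -t 1 -w xor -y -x 3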
00:07:01.832 [2024-07-15 15:29:56.720341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63923 ] 00:07:01.832 [2024-07-15 15:29:56.860945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.833 [2024-07-15 15:29:56.940580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.092 15:29:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.028 15:29:58 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:03.028 15:29:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.028 00:07:03.028 real 0m1.394s 00:07:03.028 user 0m1.219s 00:07:03.028 sys 0m0.077s 00:07:03.028 15:29:58 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.028 ************************************ 00:07:03.028 END TEST accel_xor 00:07:03.028 ************************************ 00:07:03.028 15:29:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:03.028 15:29:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.028 15:29:58 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:03.028 15:29:58 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:03.028 15:29:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.028 15:29:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.028 ************************************ 00:07:03.028 START TEST accel_dif_verify 00:07:03.028 ************************************ 00:07:03.028 15:29:58 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:03.028 15:29:58 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:03.287 [2024-07-15 15:29:58.159402] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
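Every accel_perf invocation in this log is passed -c /dev/fd/62, and the build_accel_config xtrace above (accel_json_cfg=(), three failed [[ 0 -gt 0 ]] checks, local IFS=',', jq -r .) shows an empty JSON configuration being assembled and handed to the tool through a process-substitution file descriptor. The exact JSON accel.sh builds is not visible here, so the document shape below is a placeholder assumption; only the empty-array/jq/fd plumbing is taken from the trace:

  # Hedged sketch of the "-c /dev/fd/NN" config plumbing.
  accel_json_cfg=()                                   # stays empty: no module overrides requested
  json=$(printf '{"subsystems":[%s]}' "$(IFS=,; echo "${accel_json_cfg[*]}")")
  jq -r . <<< "$json"                                 # sanity-parse, as the trace's final "jq -r ." does
  ./build/examples/accel_perf -c <(echo "$json") -t 1 -w dif_verify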
00:07:03.287 [2024-07-15 15:29:58.159485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63963 ] 00:07:03.287 [2024-07-15 15:29:58.296489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.287 [2024-07-15 15:29:58.365983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.287 15:29:58 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.287 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.546 15:29:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.478 15:29:59 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:04.478 15:29:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.478 00:07:04.478 real 0m1.380s 00:07:04.478 user 0m1.210s 00:07:04.478 sys 0m0.077s 00:07:04.478 15:29:59 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.478 ************************************ 00:07:04.478 END TEST accel_dif_verify 00:07:04.478 ************************************ 00:07:04.478 15:29:59 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:04.478 15:29:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.478 15:29:59 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:04.478 15:29:59 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:04.478 15:29:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.478 15:29:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.478 ************************************ 00:07:04.478 START TEST accel_dif_generate 00:07:04.478 ************************************ 00:07:04.478 15:29:59 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.478 15:29:59 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:04.478 15:29:59 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:04.478 [2024-07-15 15:29:59.587643] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:07:04.478 [2024-07-15 15:29:59.587730] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63992 ] 00:07:04.737 [2024-07-15 15:29:59.724235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.737 [2024-07-15 15:29:59.793594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.737 15:29:59 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.737 15:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 ************************************ 00:07:06.171 END TEST accel_dif_generate 00:07:06.171 ************************************ 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.171 15:30:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:06.171 
15:30:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.171 00:07:06.171 real 0m1.383s 00:07:06.171 user 0m1.214s 00:07:06.171 sys 0m0.072s 00:07:06.171 15:30:00 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.171 15:30:00 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:06.171 15:30:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.171 15:30:00 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:06.171 15:30:00 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:06.171 15:30:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.171 15:30:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.171 ************************************ 00:07:06.171 START TEST accel_dif_generate_copy 00:07:06.171 ************************************ 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:06.171 15:30:00 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:06.171 [2024-07-15 15:30:01.015212] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
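The last three tests exercise the DIF (Data Integrity Field) opcodes: dif_verify, dif_generate and, just starting above, dif_generate_copy (accel.sh@111-113 in the trace). Their config echoes differ from the earlier workloads in showing two '4096 bytes' values plus '512 bytes' and '8 bytes'; the 512-byte and 8-byte values presumably correspond to the protected block size and the per-block DIF tuple, though the log does not label them. The invocations themselves are taken verbatim from the run_test lines:

  # The three DIF runs driven by the harness above.
  ./build/examples/accel_perf -t 1 -w dif_verify
  ./build/examples/accel_perf -t 1 -w dif_generate
  ./build/examples/accel_perf -t 1 -w dif_generate_copy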
00:07:06.171 [2024-07-15 15:30:01.015329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64021 ] 00:07:06.171 [2024-07-15 15:30:01.151859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.171 [2024-07-15 15:30:01.212979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:01 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:06.171 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.172 15:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
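The long run of "# val=..." lines above is accel.sh echoing the effective settings for this case back through its option loop: opcode dif_generate_copy, 4096-byte buffers (echoed twice), the software module, and a 1-second run. Combined with the launch line traced a little earlier, the whole case reduces to one command; the paths are the ones on this build VM, and /dev/fd/62 only exists because the harness feeds the build_accel_config output through it:

    # Effective invocation for the accel_dif_generate_copy case, as traced:
    #   -c /dev/fd/62  JSON accel config streamed in by the harness
    #   -t 1           run the workload for one second (the '1 seconds' above)
    #   -w ...         opcode under test
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy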
00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.543 00:07:07.543 real 0m1.376s 00:07:07.543 user 0m1.201s 00:07:07.543 sys 0m0.077s 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.543 15:30:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:07.543 ************************************ 00:07:07.543 END TEST accel_dif_generate_copy 00:07:07.543 ************************************ 00:07:07.543 15:30:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.543 15:30:02 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:07.543 15:30:02 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.543 15:30:02 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:07.543 15:30:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.543 15:30:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.543 ************************************ 00:07:07.543 START TEST accel_comp 00:07:07.543 ************************************ 00:07:07.543 15:30:02 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:07.543 15:30:02 
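accel_dif_generate_copy passes in about the same time as its sibling (real 0m1.376s, again on the software path), and run_test moves on to accel_comp. This is the first case that takes an input file: -l points accel_perf at test/accel/bib in the repo, evidently the data corpus for the compress/decompress workloads, although the log never spells the flag out. The traced launch boils down to:

    # accel_comp: 1-second software compress run over the bundled bib corpus
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib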
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:07.543 15:30:02 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:07.543 [2024-07-15 15:30:02.441248] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:07:07.543 [2024-07-15 15:30:02.441355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64062 ] 00:07:07.543 [2024-07-15 15:30:02.576269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.543 [2024-07-15 15:30:02.635818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.802 15:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.734 ************************************ 00:07:08.734 END TEST accel_comp 00:07:08.734 ************************************ 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:08.734 15:30:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.734 00:07:08.734 real 0m1.371s 00:07:08.734 user 0m1.209s 00:07:08.734 sys 0m0.064s 00:07:08.734 15:30:03 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.734 15:30:03 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:08.734 15:30:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.734 15:30:03 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.734 15:30:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:08.734 15:30:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.734 15:30:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.734 ************************************ 00:07:08.734 START TEST accel_decomp 00:07:08.734 ************************************ 00:07:08.734 15:30:03 
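The compress case finishes in real 0m1.371s, and the suite flips to the opposite direction with accel_decomp. The invocation reuses the same bib file and adds -y, which the compress run did not carry; presumably that asks accel_perf to verify the decompressed output, though the log itself never states what the flag does. As traced:

    # accel_decomp: same corpus, decompress opcode, with -y added on top of the
    # compress invocation (verification is an inference, not stated in the log)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y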
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:08.734 15:30:03 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:08.735 [2024-07-15 15:30:03.851797] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:07:08.735 [2024-07-15 15:30:03.851873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64093 ] 00:07:08.992 [2024-07-15 15:30:03.985307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.992 [2024-07-15 15:30:04.055672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.992 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.993 15:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:10.364 15:30:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.364 00:07:10.364 real 0m1.385s 00:07:10.364 user 0m1.200s 00:07:10.364 sys 0m0.086s 00:07:10.364 15:30:05 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.364 15:30:05 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:10.364 ************************************ 00:07:10.364 END TEST accel_decomp 00:07:10.364 ************************************ 00:07:10.364 15:30:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.364 15:30:05 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:10.364 15:30:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:10.364 15:30:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.364 15:30:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.364 ************************************ 00:07:10.364 START TEST accel_decomp_full 00:07:10.364 ************************************ 00:07:10.364 15:30:05 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:10.364 15:30:05 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:10.364 [2024-07-15 15:30:05.285490] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
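accel_decomp lands right alongside the earlier cases (real 0m1.385s), and accel_decomp_full now repeats the decompress run with one extra flag, -o 0. Its effect shows up a few trace lines further down: the buffer echoed in the config loop grows from '4096 bytes' to '111250 bytes', so the case appears to push the whole bib payload through in a single operation instead of 4 KiB chunks, hence the "full" in the test name. The traced command:

    # accel_decomp_full: decompress the corpus in one full-sized buffer (-o 0)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0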
00:07:10.364 [2024-07-15 15:30:05.285578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64128 ] 00:07:10.364 [2024-07-15 15:30:05.419127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.364 [2024-07-15 15:30:05.477418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.621 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.622 15:30:05 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.622 15:30:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.553 15:30:06 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:11.553 15:30:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.553 00:07:11.553 real 0m1.384s 00:07:11.553 user 0m1.205s 00:07:11.553 sys 0m0.080s 00:07:11.553 15:30:06 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.553 15:30:06 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:11.553 ************************************ 00:07:11.553 END TEST accel_decomp_full 00:07:11.553 ************************************ 00:07:11.812 15:30:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.812 15:30:06 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.812 15:30:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:11.812 15:30:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.812 15:30:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.812 ************************************ 00:07:11.812 START TEST accel_decomp_mcore 00:07:11.812 ************************************ 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 
-m 0xf 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:11.812 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:11.812 [2024-07-15 15:30:06.716395] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:07:11.812 [2024-07-15 15:30:06.716482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64162 ] 00:07:11.812 [2024-07-15 15:30:06.846961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.812 [2024-07-15 15:30:06.911102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.812 [2024-07-15 15:30:06.911221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.812 [2024-07-15 15:30:06.911348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.812 [2024-07-15 15:30:06.911348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 
15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.069 15:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.003 00:07:13.003 real 0m1.397s 00:07:13.003 user 0m4.432s 00:07:13.003 sys 0m0.101s 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.003 15:30:08 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:13.003 ************************************ 00:07:13.003 END TEST accel_decomp_mcore 00:07:13.003 ************************************ 00:07:13.003 15:30:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.003 15:30:08 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.003 15:30:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:13.003 15:30:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.003 15:30:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.003 ************************************ 00:07:13.003 START TEST accel_decomp_full_mcore 00:07:13.003 ************************************ 00:07:13.003 15:30:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.003 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:13.003 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- 
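The multi-core run's timing summary is worth a second look: wall time barely moves (real 0m1.397s versus roughly 1.38 s single-core), but user time jumps to 0m4.432s, consistent with four reactors each running the same 1-second workload in parallel. The final case started here, accel_decomp_full_mcore, simply combines the two previous variations: full-sized buffers (-o 0) and the 0xf core mask. For reference, the traced launch line, plus a one-liner for pulling all of these per-case summaries out of a saved copy of this log (build.log is a placeholder name):

    # accel_decomp_full_mcore: full-buffer decompress spread across four cores
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf

    # Collect the real/user/sys wall-clock figures for every accel case in one pass
    grep -Eo 'real[[:space:]]+[0-9]+m[0-9.]+s' build.log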
accel/accel.sh@40 -- # local IFS=, 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:13.261 [2024-07-15 15:30:08.149964] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:07:13.261 [2024-07-15 15:30:08.150044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64194 ] 00:07:13.261 [2024-07-15 15:30:08.284617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.261 [2024-07-15 15:30:08.347579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.261 [2024-07-15 15:30:08.347743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.261 [2024-07-15 15:30:08.347877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.261 [2024-07-15 15:30:08.347881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.261 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.262 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.520 15:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.455 00:07:14.455 real 0m1.384s 00:07:14.455 user 0m0.012s 00:07:14.455 sys 0m0.004s 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.455 15:30:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:14.455 ************************************ 00:07:14.455 END TEST accel_decomp_full_mcore 00:07:14.455 ************************************ 00:07:14.455 15:30:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.455 15:30:09 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:14.455 15:30:09 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:14.455 15:30:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.456 15:30:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.456 ************************************ 00:07:14.456 START TEST accel_decomp_mthread 00:07:14.456 ************************************ 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:14.456 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:14.456 [2024-07-15 15:30:09.579641] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:07:14.456 [2024-07-15 15:30:09.579739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64237 ] 00:07:14.714 [2024-07-15 15:30:09.719113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.714 [2024-07-15 15:30:09.778311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.714 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.714 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.714 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.715 15:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.091 00:07:16.091 real 0m1.379s 00:07:16.091 user 0m1.203s 00:07:16.091 sys 0m0.086s 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.091 15:30:10 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:16.091 ************************************ 00:07:16.091 END TEST accel_decomp_mthread 00:07:16.091 ************************************ 00:07:16.091 15:30:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.091 15:30:10 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.091 15:30:10 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:16.091 15:30:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.091 15:30:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.091 ************************************ 00:07:16.091 START 
TEST accel_decomp_full_mthread 00:07:16.092 ************************************ 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:16.092 15:30:10 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:16.092 [2024-07-15 15:30:10.996775] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:07:16.092 [2024-07-15 15:30:10.996851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64266 ] 00:07:16.092 [2024-07-15 15:30:11.133215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.092 [2024-07-15 15:30:11.201675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.350 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.351 15:30:11 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.351 15:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.286 00:07:17.286 real 0m1.407s 00:07:17.286 user 0m1.239s 00:07:17.286 sys 0m0.076s 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.286 15:30:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:17.286 ************************************ 00:07:17.286 END TEST accel_decomp_full_mthread 00:07:17.286 ************************************ 
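The mcore and mthread decompress cases above all drive the same accel_perf example binary; only the core mask (-m), the per-core thread count (-T), and the -o 0 "full" variant change between runs, and the exact command lines are visible in the xtrace output. As a rough standalone sketch using the binary and input paths recorded in this run (the generated /dev/fd/62 accel config is dropped here, which is an assumption since the harness always supplies one, and the flag readings below are inferred from the traced values such as '4096 bytes' versus '111250 bytes'):

    # 1-second software decompress of the bib test file with result verification (-y)
    # across cores 0-3; judging from the traced values, -o 0 switches the run from
    # 4 KiB chunks to the file's full 111250-byte size
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0 -m 0xf

    # same workload on a single core with two worker threads (-T 2), as in the mthread cases
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -T 2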
00:07:17.554 15:30:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.554 15:30:12 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:17.554 15:30:12 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:17.554 15:30:12 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:17.554 15:30:12 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:17.554 15:30:12 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.554 15:30:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.554 15:30:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.554 15:30:12 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.554 15:30:12 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.554 15:30:12 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.554 15:30:12 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.554 15:30:12 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:17.554 15:30:12 accel -- accel/accel.sh@41 -- # jq -r . 00:07:17.554 ************************************ 00:07:17.554 START TEST accel_dif_functional_tests 00:07:17.554 ************************************ 00:07:17.554 15:30:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:17.554 [2024-07-15 15:30:12.485426] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:07:17.555 [2024-07-15 15:30:12.485535] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64304 ] 00:07:17.555 [2024-07-15 15:30:12.624449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.820 [2024-07-15 15:30:12.684609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.820 [2024-07-15 15:30:12.684708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.820 [2024-07-15 15:30:12.684708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.820 00:07:17.820 00:07:17.820 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.820 http://cunit.sourceforge.net/ 00:07:17.820 00:07:17.820 00:07:17.820 Suite: accel_dif 00:07:17.820 Test: verify: DIF generated, GUARD check ...passed 00:07:17.820 Test: verify: DIF generated, APPTAG check ...passed 00:07:17.820 Test: verify: DIF generated, REFTAG check ...passed 00:07:17.820 Test: verify: DIF not generated, GUARD check ...passed 00:07:17.820 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 15:30:12.734158] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:17.820 [2024-07-15 15:30:12.734236] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:17.820 passed 00:07:17.820 Test: verify: DIF not generated, REFTAG check ...passed 00:07:17.820 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:17.820 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:17.820 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:17.820 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-15 15:30:12.734270] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:17.820 [2024-07-15 15:30:12.734334] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:17.820 passed 00:07:17.820 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:17.820 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:17.820 Test: verify copy: DIF generated, GUARD check ...passed 00:07:17.820 Test: verify copy: DIF generated, APPTAG check ...[2024-07-15 15:30:12.734478] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:17.820 passed 00:07:17.820 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:17.820 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:17.820 Test: verify copy: DIF not generated, APPTAG check ...passed 00:07:17.820 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 15:30:12.734685] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:17.820 [2024-07-15 15:30:12.734735] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:17.820 passed 00:07:17.820 Test: generate copy: DIF generated, GUARD check ...passed 00:07:17.820 Test: generate copy: DIF generated, APTTAG check ...passed[2024-07-15 15:30:12.734771] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:17.820 00:07:17.820 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:17.820 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:17.820 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:17.820 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:17.820 Test: generate copy: iovecs-len validate ...passed 00:07:17.820 Test: generate copy: buffer alignment validate ...passed 00:07:17.820 00:07:17.820 [2024-07-15 15:30:12.735020] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:17.820 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.820 suites 1 1 n/a 0 0 00:07:17.820 tests 26 26 26 0 0 00:07:17.820 asserts 115 115 115 0 n/a 00:07:17.820 00:07:17.820 Elapsed time = 0.002 seconds 00:07:17.820 00:07:17.820 real 0m0.444s 00:07:17.820 user 0m0.514s 00:07:17.820 sys 0m0.097s 00:07:17.820 15:30:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.820 15:30:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:17.820 ************************************ 00:07:17.820 END TEST accel_dif_functional_tests 00:07:17.820 ************************************ 00:07:17.820 15:30:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.820 00:07:17.820 real 0m30.963s 00:07:17.820 user 0m33.062s 00:07:17.820 sys 0m2.826s 00:07:17.820 15:30:12 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.820 15:30:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.820 ************************************ 00:07:17.820 END TEST accel 00:07:17.820 ************************************ 00:07:18.078 15:30:12 -- common/autotest_common.sh@1142 -- # return 0 00:07:18.078 15:30:12 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:18.078 15:30:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.078 15:30:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.078 15:30:12 -- common/autotest_common.sh@10 -- # set +x 00:07:18.078 ************************************ 00:07:18.078 START TEST accel_rpc 00:07:18.078 ************************************ 00:07:18.078 15:30:12 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:18.078 * Looking for test storage... 00:07:18.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:18.078 15:30:13 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:18.078 15:30:13 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64368 00:07:18.078 15:30:13 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64368 00:07:18.078 15:30:13 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:18.078 15:30:13 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64368 ']' 00:07:18.078 15:30:13 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.078 15:30:13 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.078 15:30:13 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.078 15:30:13 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.078 15:30:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.078 [2024-07-15 15:30:13.104784] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:07:18.078 [2024-07-15 15:30:13.104879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64368 ] 00:07:18.336 [2024-07-15 15:30:13.241672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.336 [2024-07-15 15:30:13.341813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.271 15:30:14 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.271 15:30:14 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:19.271 15:30:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:19.271 15:30:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:19.271 15:30:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:19.271 15:30:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:19.271 15:30:14 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:19.271 15:30:14 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.271 15:30:14 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.271 15:30:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.271 ************************************ 00:07:19.271 START TEST accel_assign_opcode 00:07:19.271 ************************************ 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.271 [2024-07-15 15:30:14.154340] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.271 [2024-07-15 15:30:14.162330] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.271 software 00:07:19.271 00:07:19.271 real 0m0.197s 00:07:19.271 user 0m0.046s 00:07:19.271 sys 0m0.013s 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.271 15:30:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.271 ************************************ 00:07:19.271 END TEST accel_assign_opcode 00:07:19.271 ************************************ 00:07:19.271 15:30:14 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:19.271 15:30:14 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64368 00:07:19.271 15:30:14 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64368 ']' 00:07:19.271 15:30:14 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64368 00:07:19.271 15:30:14 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:19.271 15:30:14 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.271 15:30:14 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64368 00:07:19.530 15:30:14 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.530 15:30:14 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.530 killing process with pid 64368 00:07:19.530 15:30:14 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64368' 00:07:19.530 15:30:14 accel_rpc -- common/autotest_common.sh@967 -- # kill 64368 00:07:19.530 15:30:14 accel_rpc -- common/autotest_common.sh@972 -- # wait 64368 00:07:19.530 00:07:19.530 real 0m1.687s 00:07:19.530 user 0m1.952s 00:07:19.530 sys 0m0.338s 00:07:19.530 15:30:14 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.530 15:30:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.530 ************************************ 00:07:19.530 END TEST accel_rpc 00:07:19.530 ************************************ 00:07:19.788 15:30:14 -- common/autotest_common.sh@1142 -- # return 0 00:07:19.788 15:30:14 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:19.788 15:30:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.788 15:30:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.788 15:30:14 -- common/autotest_common.sh@10 -- # set +x 00:07:19.788 ************************************ 00:07:19.788 START TEST app_cmdline 00:07:19.788 ************************************ 00:07:19.788 15:30:14 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:19.788 * Looking for test storage... 
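The accel_assign_opcode suite traced above reduces to three RPC calls against a target started with --wait-for-rpc. A minimal manual sketch with the same binary and script paths as this run (waitforlisten and killprocess bookkeeping omitted, and rpc.py invoked directly instead of through the rpc_cmd wrapper):

    # start a target that pauses before subsystem init so opcode routing can still be changed
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &

    # route the copy opcode to the software module, then finish initialization
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init

    # the suite then expects "software" to come back for copy
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy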
00:07:19.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:19.788 15:30:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:19.788 15:30:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64479 00:07:19.788 15:30:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64479 00:07:19.788 15:30:14 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:19.788 15:30:14 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64479 ']' 00:07:19.788 15:30:14 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.788 15:30:14 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.788 15:30:14 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.788 15:30:14 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.788 15:30:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:19.788 [2024-07-15 15:30:14.835155] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:07:19.788 [2024-07-15 15:30:14.835251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64479 ] 00:07:20.047 [2024-07-15 15:30:14.973769] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.047 [2024-07-15 15:30:15.043247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.305 15:30:15 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.306 15:30:15 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:20.306 15:30:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:20.564 { 00:07:20.564 "fields": { 00:07:20.564 "commit": "d8f06a5fe", 00:07:20.564 "major": 24, 00:07:20.564 "minor": 9, 00:07:20.564 "patch": 0, 00:07:20.564 "suffix": "-pre" 00:07:20.564 }, 00:07:20.564 "version": "SPDK v24.09-pre git sha1 d8f06a5fe" 00:07:20.564 } 00:07:20.564 15:30:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:20.564 15:30:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:20.564 15:30:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:20.564 15:30:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:20.564 15:30:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:20.564 15:30:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:20.564 15:30:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.564 15:30:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:20.564 15:30:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:20.564 15:30:15 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:20.564 15:30:15 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:20.824 2024/07/15 15:30:15 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:20.824 request: 00:07:20.824 { 00:07:20.824 "method": "env_dpdk_get_mem_stats", 00:07:20.824 "params": {} 00:07:20.824 } 00:07:20.824 Got JSON-RPC error response 00:07:20.824 GoRPCClient: error on JSON-RPC call 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.824 15:30:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64479 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64479 ']' 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64479 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64479 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.824 killing process with pid 64479 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64479' 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@967 -- # kill 64479 00:07:20.824 15:30:15 app_cmdline -- common/autotest_common.sh@972 -- # wait 64479 00:07:21.082 00:07:21.082 real 0m1.406s 00:07:21.082 user 0m1.900s 00:07:21.082 sys 0m0.334s 00:07:21.082 15:30:16 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.082 15:30:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.082 ************************************ 00:07:21.082 END TEST app_cmdline 00:07:21.082 
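The cmdline test above starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods respond and anything else (env_dpdk_get_mem_stats here) is rejected with JSON-RPC error -32601. A condensed reproduction with the same paths, again leaving out the waitforlisten/killprocess plumbing:

    # only two RPC methods are whitelisted on this target
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &

    # allowed: returns the version object shown above, and the sorted method list
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort

    # not allowed: expected to fail with "Method not found" (-32601)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats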
************************************ 00:07:21.082 15:30:16 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.082 15:30:16 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:21.082 15:30:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.082 15:30:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.082 15:30:16 -- common/autotest_common.sh@10 -- # set +x 00:07:21.082 ************************************ 00:07:21.082 START TEST version 00:07:21.082 ************************************ 00:07:21.082 15:30:16 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:21.341 * Looking for test storage... 00:07:21.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:21.341 15:30:16 version -- app/version.sh@17 -- # get_header_version major 00:07:21.341 15:30:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.341 15:30:16 version -- app/version.sh@14 -- # cut -f2 00:07:21.341 15:30:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.341 15:30:16 version -- app/version.sh@17 -- # major=24 00:07:21.341 15:30:16 version -- app/version.sh@18 -- # get_header_version minor 00:07:21.341 15:30:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.341 15:30:16 version -- app/version.sh@14 -- # cut -f2 00:07:21.341 15:30:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.341 15:30:16 version -- app/version.sh@18 -- # minor=9 00:07:21.341 15:30:16 version -- app/version.sh@19 -- # get_header_version patch 00:07:21.341 15:30:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.341 15:30:16 version -- app/version.sh@14 -- # cut -f2 00:07:21.341 15:30:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.341 15:30:16 version -- app/version.sh@19 -- # patch=0 00:07:21.341 15:30:16 version -- app/version.sh@20 -- # get_header_version suffix 00:07:21.341 15:30:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:21.341 15:30:16 version -- app/version.sh@14 -- # cut -f2 00:07:21.341 15:30:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.341 15:30:16 version -- app/version.sh@20 -- # suffix=-pre 00:07:21.341 15:30:16 version -- app/version.sh@22 -- # version=24.9 00:07:21.341 15:30:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:21.341 15:30:16 version -- app/version.sh@28 -- # version=24.9rc0 00:07:21.341 15:30:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:21.341 15:30:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:21.341 15:30:16 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:21.341 15:30:16 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:21.341 00:07:21.341 real 0m0.145s 00:07:21.341 user 0m0.090s 00:07:21.341 sys 0m0.081s 00:07:21.341 15:30:16 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.341 15:30:16 version -- common/autotest_common.sh@10 -- # set +x 
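The version check above reduces to pulling four #define lines out of include/spdk/version.h with grep/cut/tr and comparing the result against the Python package's spdk.__version__. A condensed sketch of that extraction, assuming it is run from the root of an SPDK checkout (the trace maps the -pre suffix to rc0; the exact conditional is not visible in the xtrace output, so that line is an assumption):

    hdr=include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version="${major}.${minor}"
    (( patch != 0 )) && version="${version}.${patch}"          # skipped in this run: patch is 0
    [[ $suffix == -pre ]] && version="${version}rc0"           # assumed mapping of -pre to rc0
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]]                            # 24.9rc0 == 24.9rc0 in this run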
00:07:21.341 ************************************ 00:07:21.341 END TEST version 00:07:21.341 ************************************ 00:07:21.341 15:30:16 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.341 15:30:16 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:21.341 15:30:16 -- spdk/autotest.sh@198 -- # uname -s 00:07:21.341 15:30:16 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:21.341 15:30:16 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:21.341 15:30:16 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:21.341 15:30:16 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:21.341 15:30:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:21.341 15:30:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:21.341 15:30:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:21.341 15:30:16 -- common/autotest_common.sh@10 -- # set +x 00:07:21.341 15:30:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:21.341 15:30:16 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:21.341 15:30:16 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:21.341 15:30:16 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:21.341 15:30:16 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:21.341 15:30:16 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:21.341 15:30:16 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.341 15:30:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.341 15:30:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.341 15:30:16 -- common/autotest_common.sh@10 -- # set +x 00:07:21.341 ************************************ 00:07:21.341 START TEST nvmf_tcp 00:07:21.341 ************************************ 00:07:21.341 15:30:16 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.341 * Looking for test storage... 00:07:21.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.341 15:30:16 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.600 15:30:16 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:07:21.600 15:30:16 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:07:21.600 15:30:16 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.600 15:30:16 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.600 15:30:16 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:21.600 15:30:16 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.600 15:30:16 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.600 15:30:16 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.600 15:30:16 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.600 15:30:16 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.600 15:30:16 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.600 15:30:16 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.600 15:30:16 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.600 15:30:16 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:21.600 15:30:16 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:21.601 15:30:16 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.601 15:30:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:21.601 15:30:16 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:21.601 15:30:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.601 15:30:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.601 15:30:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.601 ************************************ 00:07:21.601 START TEST nvmf_example 00:07:21.601 ************************************ 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:21.601 * Looking for test storage... 
00:07:21.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:21.601 15:30:16 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:21.601 Cannot find device "nvmf_init_br" 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:21.601 Cannot find device "nvmf_tgt_br" 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:21.601 Cannot find device "nvmf_tgt_br2" 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:21.601 Cannot find device "nvmf_init_br" 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:21.601 Cannot find device "nvmf_tgt_br" 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:21.601 Cannot find device 
"nvmf_tgt_br2" 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:21.601 Cannot find device "nvmf_br" 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:21.601 Cannot find device "nvmf_init_if" 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:21.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:21.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.601 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:07:21.602 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:21.602 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:21.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:07:21.859 00:07:21.859 --- 10.0.0.2 ping statistics --- 00:07:21.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.859 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:21.859 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:21.859 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:07:21.859 00:07:21.859 --- 10.0.0.3 ping statistics --- 00:07:21.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.859 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:21.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:07:21.859 00:07:21.859 --- 10.0.0.1 ping statistics --- 00:07:21.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.859 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:21.859 15:30:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=64813 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
64813 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 64813 ']' 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.860 15:30:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 15:30:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.234 15:30:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:23.234 15:30:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:23.234 15:30:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.234 15:30:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.234 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.235 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.235 15:30:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.235 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.235 15:30:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.235 15:30:18 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.235 15:30:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:23.235 15:30:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:33.201 Initializing NVMe Controllers 00:07:33.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:33.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:33.201 Initialization complete. Launching workers. 00:07:33.201 ======================================================== 00:07:33.201 Latency(us) 00:07:33.201 Device Information : IOPS MiB/s Average min max 00:07:33.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14620.90 57.11 4381.28 657.71 22324.28 00:07:33.201 ======================================================== 00:07:33.201 Total : 14620.90 57.11 4381.28 657.71 22324.28 00:07:33.201 00:07:33.201 15:30:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:33.201 15:30:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:33.201 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.201 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.484 rmmod nvme_tcp 00:07:33.484 rmmod nvme_fabrics 00:07:33.484 rmmod nvme_keyring 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 64813 ']' 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 64813 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 64813 ']' 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 64813 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64813 00:07:33.484 killing process with pid 64813 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64813' 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 64813 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 64813 00:07:33.484 nvmf threads initialize successfully 00:07:33.484 bdev subsystem init successfully 
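Stripped of the wrappers, the example test above is a handful of JSON-RPC calls against the nvmf example app followed by one spdk_nvme_perf run from the initiator side. A minimal sketch of the equivalent commands from an SPDK checkout, assuming the rpc_cmd helper in the trace forwards to scripts/rpc.py on the default /var/tmp/spdk.sock socket (the target itself runs inside the nvmf_tgt_ns_spdk namespace in this job):

    # target configuration, as issued by nvmf_example.sh
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512                  # 64 MiB, 512 B blocks -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator load: QD 64, 4 KiB I/O, 30% reads / 70% writes, 10 seconds
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The roughly 14.6k IOPS at about 4.4 ms average latency reported above is what this single-namespace run on lcore 0 produced.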
00:07:33.484 created a nvmf target service 00:07:33.484 create targets's poll groups done 00:07:33.484 all subsystems of target started 00:07:33.484 nvmf target is running 00:07:33.484 all subsystems of target stopped 00:07:33.484 destroy targets's poll groups done 00:07:33.484 destroyed the nvmf target service 00:07:33.484 bdev subsystem finish successfully 00:07:33.484 nvmf threads destroy successfully 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.484 15:30:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:33.745 15:30:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:33.745 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:33.745 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.745 00:07:33.745 real 0m12.155s 00:07:33.745 user 0m43.961s 00:07:33.745 sys 0m1.844s 00:07:33.746 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.746 15:30:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.746 ************************************ 00:07:33.746 END TEST nvmf_example 00:07:33.746 ************************************ 00:07:33.746 15:30:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:33.746 15:30:28 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:33.746 15:30:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:33.746 15:30:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.746 15:30:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.746 ************************************ 00:07:33.746 START TEST nvmf_filesystem 00:07:33.746 ************************************ 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:33.746 * Looking for test storage... 
00:07:33.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:33.746 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:33.746 #define SPDK_CONFIG_H 00:07:33.746 #define SPDK_CONFIG_APPS 1 00:07:33.746 #define SPDK_CONFIG_ARCH native 00:07:33.746 #undef SPDK_CONFIG_ASAN 00:07:33.746 #define SPDK_CONFIG_AVAHI 1 00:07:33.746 #undef SPDK_CONFIG_CET 00:07:33.746 #define SPDK_CONFIG_COVERAGE 1 00:07:33.746 #define SPDK_CONFIG_CROSS_PREFIX 00:07:33.746 #undef SPDK_CONFIG_CRYPTO 00:07:33.746 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:33.746 #undef SPDK_CONFIG_CUSTOMOCF 00:07:33.747 #undef SPDK_CONFIG_DAOS 00:07:33.747 #define SPDK_CONFIG_DAOS_DIR 00:07:33.747 #define SPDK_CONFIG_DEBUG 1 00:07:33.747 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:33.747 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:33.747 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:33.747 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:33.747 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:33.747 #undef SPDK_CONFIG_DPDK_UADK 00:07:33.747 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:33.747 #define SPDK_CONFIG_EXAMPLES 1 00:07:33.747 #undef SPDK_CONFIG_FC 00:07:33.747 #define SPDK_CONFIG_FC_PATH 00:07:33.747 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:33.747 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:33.747 #undef SPDK_CONFIG_FUSE 00:07:33.747 #undef SPDK_CONFIG_FUZZER 00:07:33.747 #define SPDK_CONFIG_FUZZER_LIB 00:07:33.747 #define SPDK_CONFIG_GOLANG 1 00:07:33.747 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:33.747 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:33.747 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:33.747 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:33.747 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:33.747 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:33.747 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:33.747 #define SPDK_CONFIG_IDXD 1 00:07:33.747 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:33.747 #undef SPDK_CONFIG_IPSEC_MB 00:07:33.747 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:33.747 #define SPDK_CONFIG_ISAL 1 00:07:33.747 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:33.747 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:33.747 #define SPDK_CONFIG_LIBDIR 00:07:33.747 #undef SPDK_CONFIG_LTO 00:07:33.747 #define SPDK_CONFIG_MAX_LCORES 128 00:07:33.747 #define SPDK_CONFIG_NVME_CUSE 1 00:07:33.747 #undef SPDK_CONFIG_OCF 00:07:33.747 #define SPDK_CONFIG_OCF_PATH 00:07:33.747 #define SPDK_CONFIG_OPENSSL_PATH 00:07:33.747 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:33.747 #define SPDK_CONFIG_PGO_DIR 00:07:33.747 #undef SPDK_CONFIG_PGO_USE 00:07:33.747 #define SPDK_CONFIG_PREFIX /usr/local 00:07:33.747 #undef SPDK_CONFIG_RAID5F 00:07:33.747 #undef SPDK_CONFIG_RBD 00:07:33.747 #define SPDK_CONFIG_RDMA 1 00:07:33.747 #define SPDK_CONFIG_RDMA_PROV verbs 
00:07:33.747 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:33.747 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:33.747 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:33.747 #define SPDK_CONFIG_SHARED 1 00:07:33.747 #undef SPDK_CONFIG_SMA 00:07:33.747 #define SPDK_CONFIG_TESTS 1 00:07:33.747 #undef SPDK_CONFIG_TSAN 00:07:33.747 #define SPDK_CONFIG_UBLK 1 00:07:33.747 #define SPDK_CONFIG_UBSAN 1 00:07:33.747 #undef SPDK_CONFIG_UNIT_TESTS 00:07:33.747 #undef SPDK_CONFIG_URING 00:07:33.747 #define SPDK_CONFIG_URING_PATH 00:07:33.747 #undef SPDK_CONFIG_URING_ZNS 00:07:33.747 #define SPDK_CONFIG_USDT 1 00:07:33.747 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:33.747 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:33.747 #undef SPDK_CONFIG_VFIO_USER 00:07:33.747 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:33.747 #define SPDK_CONFIG_VHOST 1 00:07:33.747 #define SPDK_CONFIG_VIRTIO 1 00:07:33.747 #undef SPDK_CONFIG_VTUNE 00:07:33.747 #define SPDK_CONFIG_VTUNE_DIR 00:07:33.747 #define SPDK_CONFIG_WERROR 1 00:07:33.747 #define SPDK_CONFIG_WPDK_DIR 00:07:33.747 #undef SPDK_CONFIG_XNVME 00:07:33.747 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:33.747 15:30:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:33.747 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:33.748 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65054 ]] 00:07:33.749 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65054 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.34Tznw 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.34Tznw/tests/target /tmp/spdk.34Tznw 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:34.008 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264512512 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:07:34.009 
15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13787537408 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5242314752 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13787537408 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5242314752 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267748352 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=139264 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=96573374464 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3129405440 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:34.009 * Looking for test storage... 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13787537408 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:34.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- 
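A note on the storage probe traced above: set_test_storage creates a scratch path with mktemp -udt spdk.XXXXXX, walks the df output, and keeps the first candidate directory whose backing filesystem has at least the requested ~2 GiB free (here the /home btrfs mount), exporting it as SPDK_TEST_STORAGE. A minimal standalone sketch of that selection logic follows; it assumes GNU coreutils df and is an illustration, not the exact helper from autotest_common.sh:

    #!/usr/bin/env bash
    requested_size=$((2 * 1024 * 1024 * 1024))              # ~2 GiB, as requested in the trace
    testdir=/home/vagrant/spdk_repo/spdk/test/nvmf/target
    storage_fallback=$(mktemp -udt spdk.XXXXXX)             # e.g. /tmp/spdk.34Tznw in this run

    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        # Resolve the mount point backing this candidate; skip candidates df cannot resolve.
        mount=$(df "$target_dir" 2>/dev/null | awk '$1 !~ /Filesystem/{print $6}')
        [[ -n $mount ]] || continue
        avail=$(df --output=avail -B1 "$mount" | tail -n1)
        if (( avail >= requested_size )); then
            mkdir -p "$target_dir"
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done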
common/autotest_common.sh@1682 -- # set -o errtrace 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.009 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:34.010 Cannot find device "nvmf_tgt_br" 00:07:34.010 15:30:28 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:07:34.010 15:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:34.010 Cannot find device "nvmf_tgt_br2" 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:34.010 Cannot find device "nvmf_tgt_br" 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:34.010 Cannot find device "nvmf_tgt_br2" 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:34.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:34.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:34.010 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:34.269 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:34.269 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:34.269 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:34.269 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:34.269 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:34.269 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:34.269 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:34.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:07:34.270 00:07:34.270 --- 10.0.0.2 ping statistics --- 00:07:34.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.270 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:34.270 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:34.270 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:07:34.270 00:07:34.270 --- 10.0.0.3 ping statistics --- 00:07:34.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.270 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:34.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:07:34.270 00:07:34.270 --- 10.0.0.1 ping statistics --- 00:07:34.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.270 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.270 ************************************ 00:07:34.270 START TEST nvmf_filesystem_no_in_capsule 00:07:34.270 ************************************ 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65210 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65210 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65210 ']' 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
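Because NET_TYPE=virt, nvmf_veth_init builds the whole test fabric out of veth pairs and one Linux bridge: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target ends nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the bridge-side peers are enslaved to nvmf_br, and two iptables rules admit NVMe/TCP on port 4420 plus intra-bridge forwarding. The pings above only verify that topology before the target starts. A condensed, hedged replay of those commands (paraphrasing the nvmf/common.sh helper, run as root, not quoting it verbatim):

    set -e
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one for the initiator, two for the target listeners.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target-side ends live inside the namespace.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 / 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up, including loopback inside the namespace.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge ties the three root-namespace peers together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP (4420) in and hairpin traffic across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity pings, matching the trace output above.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1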
domain socket /var/tmp/spdk.sock...' 00:07:34.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.270 15:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.270 [2024-07-15 15:30:29.397194] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:07:34.270 [2024-07-15 15:30:29.397297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.529 [2024-07-15 15:30:29.539846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.529 [2024-07-15 15:30:29.615227] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.529 [2024-07-15 15:30:29.615499] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.529 [2024-07-15 15:30:29.615687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.529 [2024-07-15 15:30:29.615833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.529 [2024-07-15 15:30:29.615883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.529 [2024-07-15 15:30:29.616121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.529 [2024-07-15 15:30:29.616178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.529 [2024-07-15 15:30:29.617426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.529 [2024-07-15 15:30:29.617502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.464 [2024-07-15 15:30:30.416460] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
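nvmfappstart then launches the target inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, PID 65210 in this run) and waitforlisten blocks until the JSON-RPC socket /var/tmp/spdk.sock answers; the DPDK EAL and reactor messages above are the target coming up on cores 0-3. A hedged stand-in for that start-and-wait sequence (the real helpers live in nvmf/common.sh and autotest_common.sh; rpc_get_methods is just a cheap RPC used here as a liveness probe):

    spdk_root=/home/vagrant/spdk_repo/spdk

    # Run as root, inside the test namespace, with the same core mask and trace flags.
    ip netns exec nvmf_tgt_ns_spdk \
        "$spdk_root/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Block until the RPC socket is serviced, or bail out if the target dies first.
    until "$spdk_root/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done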
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.464 Malloc1 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.464 [2024-07-15 15:30:30.543615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.464 15:30:30 
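With the target up, the test provisions it over JSON-RPC: a TCP transport (the extra -o comes from NVMF_TRANSPORT_OPTS, -u 8192 sets the IO unit size, and -c 0 keeps in-capsule data at zero for this "no_in_capsule" variant), a 512 MiB Malloc1 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, the bdev attached as a namespace, and a TCP listener on 10.0.0.2:4420. rpc_cmd is a thin wrapper around scripts/rpc.py, so the equivalent direct invocations look roughly like this (sketch; socket path and argument values taken from the trace):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc bdev_malloc_create 512 512 -b Malloc1          # 512 MiB, 512-byte blocks -> 1048576 blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420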
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:35.464 { 00:07:35.464 "aliases": [ 00:07:35.464 "685e41e6-1ed9-4f3a-bfe0-357f1423e704" 00:07:35.464 ], 00:07:35.464 "assigned_rate_limits": { 00:07:35.464 "r_mbytes_per_sec": 0, 00:07:35.464 "rw_ios_per_sec": 0, 00:07:35.464 "rw_mbytes_per_sec": 0, 00:07:35.464 "w_mbytes_per_sec": 0 00:07:35.464 }, 00:07:35.464 "block_size": 512, 00:07:35.464 "claim_type": "exclusive_write", 00:07:35.464 "claimed": true, 00:07:35.464 "driver_specific": {}, 00:07:35.464 "memory_domains": [ 00:07:35.464 { 00:07:35.464 "dma_device_id": "system", 00:07:35.464 "dma_device_type": 1 00:07:35.464 }, 00:07:35.464 { 00:07:35.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.464 "dma_device_type": 2 00:07:35.464 } 00:07:35.464 ], 00:07:35.464 "name": "Malloc1", 00:07:35.464 "num_blocks": 1048576, 00:07:35.464 "product_name": "Malloc disk", 00:07:35.464 "supported_io_types": { 00:07:35.464 "abort": true, 00:07:35.464 "compare": false, 00:07:35.464 "compare_and_write": false, 00:07:35.464 "copy": true, 00:07:35.464 "flush": true, 00:07:35.464 "get_zone_info": false, 00:07:35.464 "nvme_admin": false, 00:07:35.464 "nvme_io": false, 00:07:35.464 "nvme_io_md": false, 00:07:35.464 "nvme_iov_md": false, 00:07:35.464 "read": true, 00:07:35.464 "reset": true, 00:07:35.464 "seek_data": false, 00:07:35.464 "seek_hole": false, 00:07:35.464 "unmap": true, 00:07:35.464 "write": true, 00:07:35.464 "write_zeroes": true, 00:07:35.464 "zcopy": true, 00:07:35.464 "zone_append": false, 00:07:35.464 "zone_management": false 00:07:35.464 }, 00:07:35.464 "uuid": "685e41e6-1ed9-4f3a-bfe0-357f1423e704", 00:07:35.464 "zoned": false 00:07:35.464 } 00:07:35.464 ]' 00:07:35.464 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:35.722 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:35.722 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:35.722 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:35.722 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:35.722 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:35.722 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:35.722 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:35.722 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:35.722 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:35.722 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:35.722 15:30:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:35.722 15:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:38.255 15:30:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:38.255 15:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.185 ************************************ 00:07:39.185 START TEST 
filesystem_ext4 00:07:39.185 ************************************ 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:39.185 mke2fs 1.46.5 (30-Dec-2021) 00:07:39.185 Discarding device blocks: 0/522240 done 00:07:39.185 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:39.185 Filesystem UUID: 6e1d239e-4244-4a72-8738-6fbc22e20481 00:07:39.185 Superblock backups stored on blocks: 00:07:39.185 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:39.185 00:07:39.185 Allocating group tables: 0/64 done 00:07:39.185 Writing inode tables: 0/64 done 00:07:39.185 Creating journal (8192 blocks): done 00:07:39.185 Writing superblocks and filesystem accounting information: 0/64 done 00:07:39.185 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:39.185 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:39.442 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:39.442 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:39.442 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:39.442 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:39.442 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:39.442 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@37 -- # kill -0 65210 00:07:39.442 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:39.442 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:39.442 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:39.442 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:39.442 00:07:39.442 real 0m0.339s 00:07:39.442 user 0m0.022s 00:07:39.442 sys 0m0.047s 00:07:39.442 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.442 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:39.443 ************************************ 00:07:39.443 END TEST filesystem_ext4 00:07:39.443 ************************************ 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.443 ************************************ 00:07:39.443 START TEST filesystem_btrfs 00:07:39.443 ************************************ 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:39.443 15:30:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:39.443 btrfs-progs v6.6.2 00:07:39.443 See https://btrfs.readthedocs.io for more information. 00:07:39.443 00:07:39.443 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:39.443 NOTE: several default settings have changed in version 5.15, please make sure 00:07:39.443 this does not affect your deployments: 00:07:39.443 - DUP for metadata (-m dup) 00:07:39.443 - enabled no-holes (-O no-holes) 00:07:39.443 - enabled free-space-tree (-R free-space-tree) 00:07:39.443 00:07:39.443 Label: (null) 00:07:39.443 UUID: 19da8240-135c-44b0-9817-02a23da49797 00:07:39.443 Node size: 16384 00:07:39.443 Sector size: 4096 00:07:39.443 Filesystem size: 510.00MiB 00:07:39.443 Block group profiles: 00:07:39.443 Data: single 8.00MiB 00:07:39.443 Metadata: DUP 32.00MiB 00:07:39.443 System: DUP 8.00MiB 00:07:39.443 SSD detected: yes 00:07:39.443 Zoned device: no 00:07:39.443 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:39.443 Runtime features: free-space-tree 00:07:39.443 Checksum: crc32c 00:07:39.443 Number of devices: 1 00:07:39.443 Devices: 00:07:39.443 ID SIZE PATH 00:07:39.443 1 510.00MiB /dev/nvme0n1p1 00:07:39.443 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:39.443 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65210 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:39.703 00:07:39.703 real 0m0.184s 00:07:39.703 user 0m0.027s 00:07:39.703 sys 0m0.050s 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:39.703 
************************************ 00:07:39.703 END TEST filesystem_btrfs 00:07:39.703 ************************************ 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.703 ************************************ 00:07:39.703 START TEST filesystem_xfs 00:07:39.703 ************************************ 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:39.703 15:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:39.703 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:39.703 = sectsz=512 attr=2, projid32bit=1 00:07:39.703 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:39.703 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:39.703 data = bsize=4096 blocks=130560, imaxpct=25 00:07:39.703 = sunit=0 swidth=0 blks 00:07:39.703 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:39.703 log =internal log bsize=4096 blocks=16384, version=2 00:07:39.703 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:39.703 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:40.268 Discarding blocks...Done. 
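For readers reproducing this run outside the harness: the rpc_cmd calls traced above (target/filesystem.sh@52-69) boil down to a short bring-up sequence. A minimal sketch, assuming a stock SPDK build, scripts/rpc.py talking to the default /var/tmp/spdk.sock, and the same names and addresses this run uses (Malloc1, cnode1, 10.0.0.2:4420); the harness additionally runs the target inside the nvmf_tgt_ns_spdk network namespace and passes --hostnqn/--hostid to nvme connect.

# Target side: start nvmf_tgt and drive it over JSON-RPC (rpc_cmd in the trace forwards to scripts/rpc.py).
./build/bin/nvmf_tgt -m 0xF &
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # flags as traced; -c 0 = no in-capsule data
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MiB at 512 B blocks -> 1048576 blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect, then lay down the single test partition as filesystem.sh@68-69 does.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe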
00:07:40.268 15:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:40.268 15:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65210 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.834 00:07:42.834 real 0m3.096s 00:07:42.834 user 0m0.020s 00:07:42.834 sys 0m0.051s 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:42.834 ************************************ 00:07:42.834 END TEST filesystem_xfs 00:07:42.834 ************************************ 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.834 15:30:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65210 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65210 ']' 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65210 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65210 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:42.834 killing process with pid 65210 00:07:42.834 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65210' 00:07:42.835 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65210 00:07:42.835 15:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65210 00:07:43.091 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:43.091 00:07:43.091 real 0m8.870s 00:07:43.091 user 0m33.537s 00:07:43.091 sys 0m1.467s 00:07:43.091 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.091 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.091 ************************************ 00:07:43.091 END TEST nvmf_filesystem_no_in_capsule 00:07:43.091 ************************************ 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
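The xfs pass and teardown traced above follow the same pattern as the ext4 and btrfs passes: each filesystem only has to survive a small create/sync/delete cycle with the target process still alive afterwards, and then the test detaches everything it set up. A condensed sketch of filesystem.sh@23-43 and @91-101, where nvmfpid stands for the pid the harness recorded for nvmf_tgt (65210 in this run) and the script's retry counter (the i=0 at filesystem.sh@29 above) is omitted:

# Smoke-test one filesystem on the exported namespace (filesystem.sh@23-43).
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                        # the target must not have died under I/O
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the host
lsblk -l -o NAME | grep -q -w nvme0n1p1   # test partition still visible

# Teardown (filesystem.sh@91-101): drop the partition, detach the host, remove the subsystem.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"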
00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.347 ************************************ 00:07:43.347 START TEST nvmf_filesystem_in_capsule 00:07:43.347 ************************************ 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65523 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65523 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65523 ']' 00:07:43.347 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.348 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.348 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.348 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.348 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.348 15:30:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.348 [2024-07-15 15:30:38.310333] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:07:43.348 [2024-07-15 15:30:38.310432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.348 [2024-07-15 15:30:38.451035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.605 [2024-07-15 15:30:38.522799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.605 [2024-07-15 15:30:38.522879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.605 [2024-07-15 15:30:38.522894] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.605 [2024-07-15 15:30:38.522904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:43.605 [2024-07-15 15:30:38.522918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.605 [2024-07-15 15:30:38.523587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.605 [2024-07-15 15:30:38.523675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.605 [2024-07-15 15:30:38.523786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.605 [2024-07-15 15:30:38.523789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.538 [2024-07-15 15:30:39.366216] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.538 Malloc1 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.538 15:30:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.538 [2024-07-15 15:30:39.493630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.538 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:44.538 { 00:07:44.538 "aliases": [ 00:07:44.538 "f4784ff1-e6b0-44f0-8099-67c6cdab65d2" 00:07:44.538 ], 00:07:44.538 "assigned_rate_limits": { 00:07:44.538 "r_mbytes_per_sec": 0, 00:07:44.539 "rw_ios_per_sec": 0, 00:07:44.539 "rw_mbytes_per_sec": 0, 00:07:44.539 "w_mbytes_per_sec": 0 00:07:44.539 }, 00:07:44.539 "block_size": 512, 00:07:44.539 "claim_type": "exclusive_write", 00:07:44.539 "claimed": true, 00:07:44.539 "driver_specific": {}, 00:07:44.539 "memory_domains": [ 00:07:44.539 { 00:07:44.539 "dma_device_id": "system", 00:07:44.539 "dma_device_type": 1 00:07:44.539 }, 00:07:44.539 { 00:07:44.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.539 "dma_device_type": 2 00:07:44.539 } 00:07:44.539 ], 00:07:44.539 "name": "Malloc1", 00:07:44.539 "num_blocks": 1048576, 00:07:44.539 "product_name": "Malloc disk", 00:07:44.539 "supported_io_types": { 00:07:44.539 "abort": true, 00:07:44.539 "compare": false, 00:07:44.539 "compare_and_write": false, 00:07:44.539 "copy": true, 00:07:44.539 "flush": true, 00:07:44.539 "get_zone_info": false, 00:07:44.539 "nvme_admin": false, 00:07:44.539 "nvme_io": false, 00:07:44.539 "nvme_io_md": false, 00:07:44.539 "nvme_iov_md": false, 00:07:44.539 "read": true, 00:07:44.539 "reset": true, 00:07:44.539 "seek_data": false, 00:07:44.539 "seek_hole": false, 00:07:44.539 "unmap": true, 
00:07:44.539 "write": true, 00:07:44.539 "write_zeroes": true, 00:07:44.539 "zcopy": true, 00:07:44.539 "zone_append": false, 00:07:44.539 "zone_management": false 00:07:44.539 }, 00:07:44.539 "uuid": "f4784ff1-e6b0-44f0-8099-67c6cdab65d2", 00:07:44.539 "zoned": false 00:07:44.539 } 00:07:44.539 ]' 00:07:44.539 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:44.539 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:44.539 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:44.539 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:44.539 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:44.539 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:44.539 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:44.539 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:44.797 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:44.797 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:44.797 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:44.797 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:44.797 15:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:46.695 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:46.695 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:46.695 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:46.695 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:46.695 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:46.695 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:46.695 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:46.695 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:46.953 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:46.953 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:46.953 15:30:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:46.953 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:46.953 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:46.953 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:46.953 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:46.953 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:46.953 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:46.953 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:46.953 15:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.887 ************************************ 00:07:47.887 START TEST filesystem_in_capsule_ext4 00:07:47.887 ************************************ 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:47.887 15:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:47.887 15:30:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:47.888 mke2fs 1.46.5 (30-Dec-2021) 00:07:47.888 Discarding device blocks: 0/522240 done 00:07:47.888 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:47.888 Filesystem UUID: 05ad3328-03b3-4fda-a905-107394c3f926 00:07:47.888 Superblock backups stored on blocks: 00:07:47.888 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:47.888 00:07:47.888 Allocating group tables: 0/64 done 00:07:47.888 Writing inode tables: 0/64 done 00:07:47.888 Creating journal (8192 blocks): done 00:07:47.888 Writing superblocks and filesystem accounting information: 0/64 done 00:07:47.888 00:07:47.888 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:47.888 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65523 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.146 ************************************ 00:07:48.146 END TEST filesystem_in_capsule_ext4 00:07:48.146 ************************************ 00:07:48.146 00:07:48.146 real 0m0.291s 00:07:48.146 user 0m0.021s 00:07:48.146 sys 0m0.049s 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:48.146 15:30:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.146 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.146 ************************************ 00:07:48.146 START TEST filesystem_in_capsule_btrfs 00:07:48.146 ************************************ 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:48.404 btrfs-progs v6.6.2 00:07:48.404 See https://btrfs.readthedocs.io for more information. 00:07:48.404 00:07:48.404 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:48.404 NOTE: several default settings have changed in version 5.15, please make sure 00:07:48.404 this does not affect your deployments: 00:07:48.404 - DUP for metadata (-m dup) 00:07:48.404 - enabled no-holes (-O no-holes) 00:07:48.404 - enabled free-space-tree (-R free-space-tree) 00:07:48.404 00:07:48.404 Label: (null) 00:07:48.404 UUID: 6c25d847-c7dc-4840-99df-686bdcb89c20 00:07:48.404 Node size: 16384 00:07:48.404 Sector size: 4096 00:07:48.404 Filesystem size: 510.00MiB 00:07:48.404 Block group profiles: 00:07:48.404 Data: single 8.00MiB 00:07:48.404 Metadata: DUP 32.00MiB 00:07:48.404 System: DUP 8.00MiB 00:07:48.404 SSD detected: yes 00:07:48.404 Zoned device: no 00:07:48.404 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:48.404 Runtime features: free-space-tree 00:07:48.404 Checksum: crc32c 00:07:48.404 Number of devices: 1 00:07:48.404 Devices: 00:07:48.404 ID SIZE PATH 00:07:48.404 1 510.00MiB /dev/nvme0n1p1 00:07:48.404 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65523 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.404 ************************************ 00:07:48.404 END TEST filesystem_in_capsule_btrfs 00:07:48.404 ************************************ 00:07:48.404 00:07:48.404 real 0m0.171s 00:07:48.404 user 0m0.018s 00:07:48.404 sys 0m0.062s 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.404 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.405 ************************************ 00:07:48.405 START TEST filesystem_in_capsule_xfs 00:07:48.405 ************************************ 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:48.405 15:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:48.663 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:48.663 = sectsz=512 attr=2, projid32bit=1 00:07:48.663 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:48.663 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:48.663 data = bsize=4096 blocks=130560, imaxpct=25 00:07:48.663 = sunit=0 swidth=0 blks 00:07:48.663 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:48.663 log =internal log bsize=4096 blocks=16384, version=2 00:07:48.663 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:48.663 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:49.230 Discarding blocks...Done. 
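The in-capsule half of the suite, whose ext4/btrfs/xfs passes are traced above, differs from the first half only in the transport configuration: filesystem.sh@52 is invoked with -c 4096, so the target accepts up to 4 KiB of data carried inside the NVMe/TCP command capsule instead of a separate data transfer. The mkfs step is the same make_filesystem helper in both halves; a rough sketch of its force-flag logic as visible at autotest_common.sh@924-935, with the helper's retry counter left out:

# In-capsule variant of the transport (everything else matches the -c 0 run above).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096

# make_filesystem: ext4 forces with -F, btrfs/xfs force with -f.
make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    "mkfs.$fstype" $force "$dev_name"
}
make_filesystem xfs /dev/nvme0n1p1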
00:07:49.230 15:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:49.230 15:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:51.156 15:30:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:51.156 15:30:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:51.156 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:51.156 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:51.156 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:51.156 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:51.156 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65523 00:07:51.156 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:51.156 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:51.156 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:51.156 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:51.156 ************************************ 00:07:51.156 END TEST filesystem_in_capsule_xfs 00:07:51.156 ************************************ 00:07:51.156 00:07:51.156 real 0m2.540s 00:07:51.156 user 0m0.016s 00:07:51.156 sys 0m0.057s 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:51.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:51.157 15:30:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65523 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65523 ']' 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65523 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65523 00:07:51.157 killing process with pid 65523 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65523' 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65523 00:07:51.157 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65523 00:07:51.430 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:51.430 00:07:51.430 real 0m8.217s 00:07:51.430 user 0m31.056s 00:07:51.430 sys 0m1.420s 00:07:51.430 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.430 15:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.430 ************************************ 00:07:51.430 END TEST nvmf_filesystem_in_capsule 00:07:51.430 ************************************ 00:07:51.430 15:30:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:51.430 15:30:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:51.430 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:51.430 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:51.430 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.430 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:51.430 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.430 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.430 rmmod nvme_tcp 00:07:51.430 rmmod nvme_fabrics 00:07:51.689 rmmod nvme_keyring 00:07:51.689 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.689 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:51.689 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:51.689 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:51.689 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:51.689 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:51.690 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:51.690 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.690 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:51.690 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.690 15:30:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.690 15:30:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.690 15:30:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:51.690 00:07:51.690 real 0m17.938s 00:07:51.690 user 1m4.837s 00:07:51.690 sys 0m3.272s 00:07:51.690 15:30:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.690 ************************************ 00:07:51.690 15:30:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.690 END TEST nvmf_filesystem 00:07:51.690 ************************************ 00:07:51.690 15:30:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:51.690 15:30:46 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:51.690 15:30:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:51.690 15:30:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.690 15:30:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.690 ************************************ 00:07:51.690 START TEST nvmf_target_discovery 00:07:51.690 ************************************ 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:51.690 * Looking for test storage... 
00:07:51.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.690 15:30:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:51.691 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:51.950 Cannot find device "nvmf_tgt_br" 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:51.950 Cannot find device "nvmf_tgt_br2" 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:51.950 Cannot find device "nvmf_tgt_br" 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:51.950 Cannot find device "nvmf_tgt_br2" 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:51.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:51.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:51.950 15:30:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:51.950 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:51.950 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:51.950 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:51.950 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:51.950 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:51.950 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:51.950 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:51.950 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:51.951 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:51.951 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:51.951 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:52.210 15:30:47 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:52.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:07:52.210 00:07:52.210 --- 10.0.0.2 ping statistics --- 00:07:52.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.210 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:52.210 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:52.210 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:07:52.210 00:07:52.210 --- 10.0.0.3 ping statistics --- 00:07:52.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.210 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:52.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:52.210 00:07:52.210 --- 10.0.0.1 ping statistics --- 00:07:52.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.210 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=65972 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 65972 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 65972 ']' 00:07:52.210 15:30:47 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.210 15:30:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.210 [2024-07-15 15:30:47.231700] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:07:52.210 [2024-07-15 15:30:47.231836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.469 [2024-07-15 15:30:47.374598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.469 [2024-07-15 15:30:47.448070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.469 [2024-07-15 15:30:47.448130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.469 [2024-07-15 15:30:47.448152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.469 [2024-07-15 15:30:47.448168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.469 [2024-07-15 15:30:47.448180] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
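The xtrace that follows configures the freshly started target over JSON-RPC: one TCP transport, four null bdevs, a subsystem per bdev listening on 10.0.0.2:4420, a discovery listener, and a referral to port 4430, after which the initiator runs nvme discover. Collapsed out of the rpc_cmd noise into standalone commands (assuming SPDK's scripts/rpc.py client with its default /var/tmp/spdk.sock socket; the harness actually issues these inside the nvmf_tgt_ns_spdk namespace), the setup is roughly:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create "Null$i" 102400 512
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    # initiator side; the trace additionally passes --hostnqn/--hostid
    nvme discover -t tcp -a 10.0.0.2 -s 4420

The six discovery-log records printed further down (the current discovery subsystem, the four NVMe subsystems, and the port-4430 referral) are exactly what this sequence should produce.
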
00:07:52.469 [2024-07-15 15:30:47.448381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.469 [2024-07-15 15:30:47.448597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.469 [2024-07-15 15:30:47.449352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.469 [2024-07-15 15:30:47.449370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.406 [2024-07-15 15:30:48.324098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.406 Null1 00:07:53.406 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.407 [2024-07-15 15:30:48.378097] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 Null2 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 Null3 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 Null4 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.407 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.407 
15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -a 10.0.0.2 -s 4420 00:07:53.667 00:07:53.667 Discovery Log Number of Records 6, Generation counter 6 00:07:53.667 =====Discovery Log Entry 0====== 00:07:53.667 trtype: tcp 00:07:53.667 adrfam: ipv4 00:07:53.667 subtype: current discovery subsystem 00:07:53.667 treq: not required 00:07:53.667 portid: 0 00:07:53.667 trsvcid: 4420 00:07:53.667 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:53.667 traddr: 10.0.0.2 00:07:53.667 eflags: explicit discovery connections, duplicate discovery information 00:07:53.667 sectype: none 00:07:53.667 =====Discovery Log Entry 1====== 00:07:53.667 trtype: tcp 00:07:53.667 adrfam: ipv4 00:07:53.667 subtype: nvme subsystem 00:07:53.667 treq: not required 00:07:53.667 portid: 0 00:07:53.667 trsvcid: 4420 00:07:53.667 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:53.667 traddr: 10.0.0.2 00:07:53.667 eflags: none 00:07:53.667 sectype: none 00:07:53.667 =====Discovery Log Entry 2====== 00:07:53.667 trtype: tcp 00:07:53.667 adrfam: ipv4 00:07:53.667 subtype: nvme subsystem 00:07:53.667 treq: not required 00:07:53.667 portid: 0 00:07:53.667 trsvcid: 4420 00:07:53.667 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:53.667 traddr: 10.0.0.2 00:07:53.667 eflags: none 00:07:53.667 sectype: none 00:07:53.667 =====Discovery Log Entry 3====== 00:07:53.667 trtype: tcp 00:07:53.667 adrfam: ipv4 00:07:53.667 subtype: nvme subsystem 00:07:53.667 treq: not required 00:07:53.667 portid: 0 00:07:53.667 trsvcid: 4420 00:07:53.667 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:53.667 traddr: 10.0.0.2 00:07:53.667 eflags: none 00:07:53.667 sectype: none 00:07:53.667 =====Discovery Log Entry 4====== 00:07:53.667 trtype: tcp 00:07:53.667 adrfam: ipv4 00:07:53.667 subtype: nvme subsystem 00:07:53.667 treq: not required 00:07:53.667 portid: 0 00:07:53.667 trsvcid: 4420 00:07:53.667 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:53.667 traddr: 10.0.0.2 00:07:53.667 eflags: none 00:07:53.667 sectype: none 00:07:53.667 =====Discovery Log Entry 5====== 00:07:53.667 trtype: tcp 00:07:53.667 adrfam: ipv4 00:07:53.667 subtype: discovery subsystem referral 00:07:53.667 treq: not required 00:07:53.667 portid: 0 00:07:53.667 trsvcid: 4430 00:07:53.667 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:53.667 traddr: 10.0.0.2 00:07:53.667 eflags: none 00:07:53.667 sectype: none 00:07:53.667 Perform nvmf subsystem discovery via RPC 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.667 [ 00:07:53.667 { 00:07:53.667 "allow_any_host": true, 00:07:53.667 "hosts": [], 00:07:53.667 "listen_addresses": [ 00:07:53.667 { 00:07:53.667 "adrfam": "IPv4", 00:07:53.667 "traddr": "10.0.0.2", 00:07:53.667 "trsvcid": "4420", 00:07:53.667 "trtype": "TCP" 00:07:53.667 } 00:07:53.667 ], 00:07:53.667 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:53.667 "subtype": "Discovery" 00:07:53.667 }, 00:07:53.667 { 00:07:53.667 "allow_any_host": true, 00:07:53.667 "hosts": [], 00:07:53.667 "listen_addresses": [ 00:07:53.667 { 
00:07:53.667 "adrfam": "IPv4", 00:07:53.667 "traddr": "10.0.0.2", 00:07:53.667 "trsvcid": "4420", 00:07:53.667 "trtype": "TCP" 00:07:53.667 } 00:07:53.667 ], 00:07:53.667 "max_cntlid": 65519, 00:07:53.667 "max_namespaces": 32, 00:07:53.667 "min_cntlid": 1, 00:07:53.667 "model_number": "SPDK bdev Controller", 00:07:53.667 "namespaces": [ 00:07:53.667 { 00:07:53.667 "bdev_name": "Null1", 00:07:53.667 "name": "Null1", 00:07:53.667 "nguid": "EA08535788554E10AF10C5E0BBE872C1", 00:07:53.667 "nsid": 1, 00:07:53.667 "uuid": "ea085357-8855-4e10-af10-c5e0bbe872c1" 00:07:53.667 } 00:07:53.667 ], 00:07:53.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.667 "serial_number": "SPDK00000000000001", 00:07:53.667 "subtype": "NVMe" 00:07:53.667 }, 00:07:53.667 { 00:07:53.667 "allow_any_host": true, 00:07:53.667 "hosts": [], 00:07:53.667 "listen_addresses": [ 00:07:53.667 { 00:07:53.667 "adrfam": "IPv4", 00:07:53.667 "traddr": "10.0.0.2", 00:07:53.667 "trsvcid": "4420", 00:07:53.667 "trtype": "TCP" 00:07:53.667 } 00:07:53.667 ], 00:07:53.667 "max_cntlid": 65519, 00:07:53.667 "max_namespaces": 32, 00:07:53.667 "min_cntlid": 1, 00:07:53.667 "model_number": "SPDK bdev Controller", 00:07:53.667 "namespaces": [ 00:07:53.667 { 00:07:53.667 "bdev_name": "Null2", 00:07:53.667 "name": "Null2", 00:07:53.667 "nguid": "C2E55995DE8D45199548958AB0351392", 00:07:53.667 "nsid": 1, 00:07:53.667 "uuid": "c2e55995-de8d-4519-9548-958ab0351392" 00:07:53.667 } 00:07:53.667 ], 00:07:53.667 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:53.667 "serial_number": "SPDK00000000000002", 00:07:53.667 "subtype": "NVMe" 00:07:53.667 }, 00:07:53.667 { 00:07:53.667 "allow_any_host": true, 00:07:53.667 "hosts": [], 00:07:53.667 "listen_addresses": [ 00:07:53.667 { 00:07:53.667 "adrfam": "IPv4", 00:07:53.667 "traddr": "10.0.0.2", 00:07:53.667 "trsvcid": "4420", 00:07:53.667 "trtype": "TCP" 00:07:53.667 } 00:07:53.667 ], 00:07:53.667 "max_cntlid": 65519, 00:07:53.667 "max_namespaces": 32, 00:07:53.667 "min_cntlid": 1, 00:07:53.667 "model_number": "SPDK bdev Controller", 00:07:53.667 "namespaces": [ 00:07:53.667 { 00:07:53.667 "bdev_name": "Null3", 00:07:53.667 "name": "Null3", 00:07:53.667 "nguid": "49691823BFC4496AB1D029DD7B76EB8C", 00:07:53.667 "nsid": 1, 00:07:53.667 "uuid": "49691823-bfc4-496a-b1d0-29dd7b76eb8c" 00:07:53.667 } 00:07:53.667 ], 00:07:53.667 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:53.667 "serial_number": "SPDK00000000000003", 00:07:53.667 "subtype": "NVMe" 00:07:53.667 }, 00:07:53.667 { 00:07:53.667 "allow_any_host": true, 00:07:53.667 "hosts": [], 00:07:53.667 "listen_addresses": [ 00:07:53.667 { 00:07:53.667 "adrfam": "IPv4", 00:07:53.667 "traddr": "10.0.0.2", 00:07:53.667 "trsvcid": "4420", 00:07:53.667 "trtype": "TCP" 00:07:53.667 } 00:07:53.667 ], 00:07:53.667 "max_cntlid": 65519, 00:07:53.667 "max_namespaces": 32, 00:07:53.667 "min_cntlid": 1, 00:07:53.667 "model_number": "SPDK bdev Controller", 00:07:53.667 "namespaces": [ 00:07:53.667 { 00:07:53.667 "bdev_name": "Null4", 00:07:53.667 "name": "Null4", 00:07:53.667 "nguid": "3401F51987A64444BB54C94FE65083EC", 00:07:53.667 "nsid": 1, 00:07:53.667 "uuid": "3401f519-87a6-4444-bb54-c94fe65083ec" 00:07:53.667 } 00:07:53.667 ], 00:07:53.667 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:53.667 "serial_number": "SPDK00000000000004", 00:07:53.667 "subtype": "NVMe" 00:07:53.667 } 00:07:53.667 ] 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.667 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.668 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.668 rmmod nvme_tcp 00:07:53.668 rmmod nvme_fabrics 00:07:53.668 rmmod nvme_keyring 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 65972 ']' 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 65972 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 65972 ']' 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 65972 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65972 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:53.926 
15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:53.926 killing process with pid 65972 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65972' 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 65972 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 65972 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.926 15:30:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.926 15:30:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:53.926 00:07:53.926 real 0m2.334s 00:07:53.926 user 0m6.499s 00:07:53.926 sys 0m0.594s 00:07:53.926 15:30:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.926 15:30:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.926 ************************************ 00:07:53.926 END TEST nvmf_target_discovery 00:07:53.926 ************************************ 00:07:54.185 15:30:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:54.185 15:30:49 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:54.185 15:30:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.185 15:30:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.185 15:30:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:54.185 ************************************ 00:07:54.185 START TEST nvmf_referrals 00:07:54.185 ************************************ 00:07:54.185 15:30:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:54.185 * Looking for test storage... 
00:07:54.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.185 15:30:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:54.185 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:54.185 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.185 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.185 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.185 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.185 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.185 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:54.186 Cannot find device "nvmf_tgt_br" 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:54.186 Cannot find device "nvmf_tgt_br2" 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:54.186 Cannot find device "nvmf_tgt_br" 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:54.186 Cannot find device "nvmf_tgt_br2" 
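The "Cannot find device" messages above (and the "Cannot open network namespace" ones that follow) are only the harness tolerantly tearing down leftovers from a previous run; nvmf_veth_init then rebuilds the topology in the trace below. A condensed sketch of the equivalent layout, using the interface and namespace names defined above (illustrative only; the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way and is omitted here):

  # the target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: initiator end and target end, each with a host-side peer for the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # addressing: 10.0.0.1 on the initiator, 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers so initiator and target traffic meet
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # accept NVMe/TCP on port 4420 and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity check: the namespaced target address must answer from the host
  ping -c 1 10.0.0.2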
00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:54.186 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:54.187 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:54.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.187 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:07:54.187 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:54.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.187 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:07:54.187 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:54.187 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:54.187 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:54.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:07:54.446 00:07:54.446 --- 10.0.0.2 ping statistics --- 00:07:54.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.446 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:54.446 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:54.446 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:07:54.446 00:07:54.446 --- 10.0.0.3 ping statistics --- 00:07:54.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.446 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:54.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:54.446 00:07:54.446 --- 10.0.0.1 ping statistics --- 00:07:54.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.446 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66198 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66198 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66198 ']' 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
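With connectivity verified, nvmfappstart launches nvmf_tgt inside the namespace and the referral checks are driven over its RPC socket. A compressed, illustrative recreation of that flow with SPDK's scripts/rpc.py, assuming the repository root as working directory (socket path, addresses and ports are the ones seen in the trace; the --hostnqn/--hostid arguments and jq record filtering used by the real referrals.sh are simplified away, and the polling loop only approximates waitforlisten):

  # start the target in the namespace and wait until the RPC socket answers
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  until rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # TCP transport plus a discovery listener on 10.0.0.2:8009
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

  # add three referrals, then confirm the RPC view and an nvme discover agree
  for addr in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc nvmf_discovery_add_referral -t tcp -a "$addr" -s 4430
  done
  rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json

  # remove them again; the referral list should drop back to zero entries
  for addr in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc nvmf_discovery_remove_referral -t tcp -a "$addr" -s 4430
  done
  rpc nvmf_discovery_get_referrals | jq length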
00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.446 15:30:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.705 [2024-07-15 15:30:49.615317] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:07:54.705 [2024-07-15 15:30:49.615481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.705 [2024-07-15 15:30:49.758897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.705 [2024-07-15 15:30:49.833442] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.705 [2024-07-15 15:30:49.833503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.705 [2024-07-15 15:30:49.833517] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.705 [2024-07-15 15:30:49.833542] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.705 [2024-07-15 15:30:49.833552] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.963 [2024-07-15 15:30:49.833931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.963 [2024-07-15 15:30:49.834159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.964 [2024-07-15 15:30:49.834252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.964 [2024-07-15 15:30:49.834239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.529 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.529 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:55.529 15:30:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:55.529 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:55.529 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.529 15:30:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.529 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:55.529 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.530 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.530 [2024-07-15 15:30:50.656373] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.787 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.787 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:55.787 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.788 [2024-07-15 15:30:50.684825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 
--hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.788 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:56.046 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:56.046 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:56.046 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:56.046 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.046 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.046 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.046 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:56.046 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.046 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.046 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.046 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:56.046 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.047 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.047 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.047 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.047 15:30:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:56.047 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.047 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.047 15:30:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.047 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:56.306 15:30:51 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:56.306 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:56.565 15:30:51 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.565 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:56.824 
15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:56.824 rmmod nvme_tcp 00:07:56.824 rmmod nvme_fabrics 00:07:56.824 rmmod nvme_keyring 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66198 ']' 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66198 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66198 ']' 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66198 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66198 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:56.824 killing process with pid 66198 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66198' 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66198 00:07:56.824 15:30:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66198 00:07:57.082 15:30:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:57.082 15:30:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:57.082 15:30:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:57.082 15:30:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.082 15:30:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:57.082 15:30:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.082 15:30:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.082 15:30:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.082 15:30:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:57.082 00:07:57.082 real 0m3.067s 00:07:57.082 user 0m9.900s 00:07:57.082 sys 0m0.800s 00:07:57.082 15:30:52 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.082 15:30:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.082 ************************************ 00:07:57.082 END TEST nvmf_referrals 00:07:57.082 ************************************ 00:07:57.082 15:30:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:57.082 15:30:52 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:57.082 15:30:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.082 15:30:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.082 15:30:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:57.082 ************************************ 00:07:57.082 START TEST nvmf_connect_disconnect 00:07:57.082 ************************************ 00:07:57.082 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:57.340 * Looking for test storage... 00:07:57.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.340 15:30:52 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:07:57.340 Cannot find device "nvmf_tgt_br" 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:57.340 Cannot find device "nvmf_tgt_br2" 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:57.340 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:57.341 Cannot find device "nvmf_tgt_br" 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:57.341 Cannot find device "nvmf_tgt_br2" 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:57.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:57.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:57.341 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:57.599 15:30:52 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:57.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:07:57.599 00:07:57.599 --- 10.0.0.2 ping statistics --- 00:07:57.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.599 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:57.599 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:57.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:07:57.599 00:07:57.599 --- 10.0.0.3 ping statistics --- 00:07:57.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.599 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:57.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:57.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:07:57.599 00:07:57.599 --- 10.0.0.1 ping statistics --- 00:07:57.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.599 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66503 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66503 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66503 ']' 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.599 15:30:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:57.857 [2024-07-15 15:30:52.730111] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:07:57.858 [2024-07-15 15:30:52.730214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.858 [2024-07-15 15:30:52.870188] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.858 [2024-07-15 15:30:52.942780] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:57.858 [2024-07-15 15:30:52.942849] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.858 [2024-07-15 15:30:52.942863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.858 [2024-07-15 15:30:52.942883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.858 [2024-07-15 15:30:52.942891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.858 [2024-07-15 15:30:52.943077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.858 [2024-07-15 15:30:52.943563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.858 [2024-07-15 15:30:52.943812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.858 [2024-07-15 15:30:52.943806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:58.116 [2024-07-15 15:30:53.076559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
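Editor's note: rpc_cmd in this trace is the autotest wrapper that forwards to scripts/rpc.py over /var/tmp/spdk.sock. Outside the harness, the target launch and the provisioning steps above (plus the add-listener call that follows just below) look roughly like this; the socket-polling loop is a simplified stand-in for the harness's waitforlisten, and the paths mirror the VM layout seen in the log:

# Start the target inside the namespace built earlier, same core mask and trace flags as the log.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Crude replacement for waitforlisten: the RPC socket appears once the app is listening.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0                                   # TCP transport, options as traced
$rpc bdev_malloc_create 64 512                                                      # 64 MiB RAM bdev, 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME    # allow any host, fixed serial
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420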
00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:58.116 [2024-07-15 15:30:53.140575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:58.116 15:30:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:00.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:09.568 rmmod nvme_tcp 00:08:09.568 rmmod nvme_fabrics 00:08:09.568 rmmod nvme_keyring 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66503 ']' 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66503 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66503 ']' 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66503 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66503 00:08:09.568 killing process with pid 66503 00:08:09.568 15:31:04 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66503' 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66503 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66503 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.568 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.829 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:09.829 00:08:09.829 real 0m12.514s 00:08:09.829 user 0m45.740s 00:08:09.829 sys 0m1.752s 00:08:09.829 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.829 ************************************ 00:08:09.829 END TEST nvmf_connect_disconnect 00:08:09.829 ************************************ 00:08:09.829 15:31:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 15:31:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:09.829 15:31:04 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:09.829 15:31:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:09.829 15:31:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.829 15:31:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 ************************************ 00:08:09.829 START TEST nvmf_multitarget 00:08:09.829 ************************************ 00:08:09.829 15:31:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:09.829 * Looking for test storage... 
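Editor's note: the five "disconnected 1 controller(s)" lines above are the visible half of connect_disconnect.sh's loop (num_iterations=5); the connect side is hidden behind 'set +x', so the loop body below is an approximation using stock nvme-cli rather than the script's exact code. The teardown lines mirror nvmftestfini in the trace; the netns delete is an assumption about what the harness's remove_spdk_ns does, and nvmfpid is the PID captured when the target was started:

# Five connect/disconnect cycles against the listener at 10.0.0.2:4420.
for i in $(seq 1 5); do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$(nvme gen-hostnqn)"
    sleep 1                                              # let the controller come up before tearing it down
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # prints "NQN:... disconnected 1 controller(s)"
done
# Teardown, roughly as nvmftestfini does it.
modprobe -r nvme-tcp nvme-fabrics
kill "$nvmfpid"
ip -4 addr flush nvmf_init_if
ip netns delete nvmf_tgt_ns_spdk                         # assumed equivalent of remove_spdk_ns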
00:08:09.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.829 15:31:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:09.829 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:09.829 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.829 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.829 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.829 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.830 15:31:04 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:09.830 Cannot find device "nvmf_tgt_br" 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:09.830 Cannot find device "nvmf_tgt_br2" 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:09.830 Cannot find device "nvmf_tgt_br" 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:09.830 Cannot find device "nvmf_tgt_br2" 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:08:09.830 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:10.089 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:10.090 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:08:10.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.090 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:08:10.090 15:31:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:10.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:10.090 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:10.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:10.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:10.348 00:08:10.348 --- 10.0.0.2 ping statistics --- 00:08:10.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.348 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:10.348 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:10.348 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:08:10.348 00:08:10.348 --- 10.0.0.3 ping statistics --- 00:08:10.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.348 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:10.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:10.348 00:08:10.348 --- 10.0.0.1 ping statistics --- 00:08:10.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.348 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=66889 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 66889 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 66889 ']' 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.348 15:31:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:10.348 [2024-07-15 15:31:05.316551] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:08:10.348 [2024-07-15 15:31:05.316854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.348 [2024-07-15 15:31:05.460491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.607 [2024-07-15 15:31:05.529924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.607 [2024-07-15 15:31:05.529991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.607 [2024-07-15 15:31:05.530006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.607 [2024-07-15 15:31:05.530017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.607 [2024-07-15 15:31:05.530025] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.607 [2024-07-15 15:31:05.530131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.607 [2024-07-15 15:31:05.530393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.607 [2024-07-15 15:31:05.531346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.607 [2024-07-15 15:31:05.531360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.173 15:31:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.173 15:31:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:11.173 15:31:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:11.173 15:31:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:11.173 15:31:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:11.431 15:31:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.431 15:31:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:11.432 15:31:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:11.432 15:31:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:11.432 15:31:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:11.432 15:31:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:11.690 "nvmf_tgt_1" 00:08:11.690 15:31:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:11.690 "nvmf_tgt_2" 00:08:11.690 15:31:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:08:11.690 15:31:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:11.949 15:31:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:11.949 15:31:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:11.949 true 00:08:11.949 15:31:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:12.207 true 00:08:12.207 15:31:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:12.207 15:31:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:12.207 15:31:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:12.207 15:31:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:12.207 15:31:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:12.207 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:12.207 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:12.207 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.207 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:12.207 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.207 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.207 rmmod nvme_tcp 00:08:12.207 rmmod nvme_fabrics 00:08:12.207 rmmod nvme_keyring 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 66889 ']' 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 66889 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 66889 ']' 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 66889 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66889 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:12.466 killing process with pid 66889 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66889' 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 66889 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 66889 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:12.466 ************************************ 00:08:12.466 END TEST nvmf_multitarget 00:08:12.466 ************************************ 00:08:12.466 00:08:12.466 real 0m2.819s 00:08:12.466 user 0m9.203s 00:08:12.466 sys 0m0.648s 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.466 15:31:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:12.724 15:31:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:12.724 15:31:07 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:12.724 15:31:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:12.724 15:31:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.724 15:31:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.724 ************************************ 00:08:12.724 START TEST nvmf_rpc 00:08:12.724 ************************************ 00:08:12.724 15:31:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:12.724 * Looking for test storage... 
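Editor's note: the nvmf_multitarget test above exercises the multi-target RPCs through test/nvmf/target/multitarget_rpc.py against the same /var/tmp/spdk.sock instance started for the test. The whole sequence, including the jq length checks, condenses to the following (script path and flags exactly as traced):

mt=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
$mt nvmf_get_targets | jq length            # 1: only the default target exists
$mt nvmf_create_target -n nvmf_tgt_1 -s 32  # prints "nvmf_tgt_1"
$mt nvmf_create_target -n nvmf_tgt_2 -s 32  # prints "nvmf_tgt_2"
$mt nvmf_get_targets | jq length            # 3: default plus the two new targets
$mt nvmf_delete_target -n nvmf_tgt_1        # prints "true"
$mt nvmf_delete_target -n nvmf_tgt_2        # prints "true"
$mt nvmf_get_targets | jq length            # back to 1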
00:08:12.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.724 15:31:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.724 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:12.724 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.724 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.724 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.724 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.724 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.724 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.724 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.724 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.724 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:12.725 Cannot find device "nvmf_tgt_br" 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.725 Cannot find device "nvmf_tgt_br2" 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:12.725 Cannot find device "nvmf_tgt_br" 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:12.725 Cannot find device "nvmf_tgt_br2" 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:12.725 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:12.985 15:31:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:12.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:08:12.985 00:08:12.985 --- 10.0.0.2 ping statistics --- 00:08:12.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.985 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:12.985 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:12.985 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:08:12.985 00:08:12.985 --- 10.0.0.3 ping statistics --- 00:08:12.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.985 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:12.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:08:12.985 00:08:12.985 --- 10.0.0.1 ping statistics --- 00:08:12.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.985 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67126 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67126 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67126 ']' 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.985 15:31:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.985 [2024-07-15 15:31:08.107604] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:08:12.985 [2024-07-15 15:31:08.107699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.245 [2024-07-15 15:31:08.244010] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.245 [2024-07-15 15:31:08.302727] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.245 [2024-07-15 15:31:08.302773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:13.245 [2024-07-15 15:31:08.302792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.245 [2024-07-15 15:31:08.302800] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.245 [2024-07-15 15:31:08.302807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.245 [2024-07-15 15:31:08.302883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.245 [2024-07-15 15:31:08.303729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.245 [2024-07-15 15:31:08.303799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.245 [2024-07-15 15:31:08.303803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:14.180 "poll_groups": [ 00:08:14.180 { 00:08:14.180 "admin_qpairs": 0, 00:08:14.180 "completed_nvme_io": 0, 00:08:14.180 "current_admin_qpairs": 0, 00:08:14.180 "current_io_qpairs": 0, 00:08:14.180 "io_qpairs": 0, 00:08:14.180 "name": "nvmf_tgt_poll_group_000", 00:08:14.180 "pending_bdev_io": 0, 00:08:14.180 "transports": [] 00:08:14.180 }, 00:08:14.180 { 00:08:14.180 "admin_qpairs": 0, 00:08:14.180 "completed_nvme_io": 0, 00:08:14.180 "current_admin_qpairs": 0, 00:08:14.180 "current_io_qpairs": 0, 00:08:14.180 "io_qpairs": 0, 00:08:14.180 "name": "nvmf_tgt_poll_group_001", 00:08:14.180 "pending_bdev_io": 0, 00:08:14.180 "transports": [] 00:08:14.180 }, 00:08:14.180 { 00:08:14.180 "admin_qpairs": 0, 00:08:14.180 "completed_nvme_io": 0, 00:08:14.180 "current_admin_qpairs": 0, 00:08:14.180 "current_io_qpairs": 0, 00:08:14.180 "io_qpairs": 0, 00:08:14.180 "name": "nvmf_tgt_poll_group_002", 00:08:14.180 "pending_bdev_io": 0, 00:08:14.180 "transports": [] 00:08:14.180 }, 00:08:14.180 { 00:08:14.180 "admin_qpairs": 0, 00:08:14.180 "completed_nvme_io": 0, 00:08:14.180 "current_admin_qpairs": 0, 00:08:14.180 "current_io_qpairs": 0, 00:08:14.180 "io_qpairs": 0, 00:08:14.180 "name": "nvmf_tgt_poll_group_003", 00:08:14.180 "pending_bdev_io": 0, 00:08:14.180 "transports": [] 00:08:14.180 } 00:08:14.180 ], 00:08:14.180 "tick_rate": 2200000000 00:08:14.180 }' 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.180 [2024-07-15 15:31:09.276559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.180 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.438 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.438 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:14.438 "poll_groups": [ 00:08:14.438 { 00:08:14.438 "admin_qpairs": 0, 00:08:14.438 "completed_nvme_io": 0, 00:08:14.438 "current_admin_qpairs": 0, 00:08:14.438 "current_io_qpairs": 0, 00:08:14.438 "io_qpairs": 0, 00:08:14.438 "name": "nvmf_tgt_poll_group_000", 00:08:14.438 "pending_bdev_io": 0, 00:08:14.438 "transports": [ 00:08:14.438 { 00:08:14.438 "trtype": "TCP" 00:08:14.438 } 00:08:14.438 ] 00:08:14.438 }, 00:08:14.438 { 00:08:14.438 "admin_qpairs": 0, 00:08:14.438 "completed_nvme_io": 0, 00:08:14.438 "current_admin_qpairs": 0, 00:08:14.438 "current_io_qpairs": 0, 00:08:14.438 "io_qpairs": 0, 00:08:14.438 "name": "nvmf_tgt_poll_group_001", 00:08:14.438 "pending_bdev_io": 0, 00:08:14.438 "transports": [ 00:08:14.438 { 00:08:14.438 "trtype": "TCP" 00:08:14.438 } 00:08:14.438 ] 00:08:14.438 }, 00:08:14.438 { 00:08:14.438 "admin_qpairs": 0, 00:08:14.438 "completed_nvme_io": 0, 00:08:14.438 "current_admin_qpairs": 0, 00:08:14.438 "current_io_qpairs": 0, 00:08:14.438 "io_qpairs": 0, 00:08:14.438 "name": "nvmf_tgt_poll_group_002", 00:08:14.438 "pending_bdev_io": 0, 00:08:14.438 "transports": [ 00:08:14.438 { 00:08:14.438 "trtype": "TCP" 00:08:14.438 } 00:08:14.438 ] 00:08:14.438 }, 00:08:14.438 { 00:08:14.438 "admin_qpairs": 0, 00:08:14.438 "completed_nvme_io": 0, 00:08:14.438 "current_admin_qpairs": 0, 00:08:14.438 "current_io_qpairs": 0, 00:08:14.438 "io_qpairs": 0, 00:08:14.438 "name": "nvmf_tgt_poll_group_003", 00:08:14.439 "pending_bdev_io": 0, 00:08:14.439 "transports": [ 00:08:14.439 { 00:08:14.439 "trtype": "TCP" 00:08:14.439 } 00:08:14.439 ] 00:08:14.439 } 00:08:14.439 ], 00:08:14.439 "tick_rate": 2200000000 00:08:14.439 }' 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
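The assertions just above create the TCP transport and then re-read nvmf_get_stats to confirm that each of the four poll groups now lists a TCP transport and that every qpair counter is still zero. Outside the rpc_cmd/jcount/jsum wrappers, roughly the same check can be driven directly with rpc.py and jq; a sketch assuming the default /var/tmp/spdk.sock socket and the repo path from this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # same options the test folds into NVMF_TRANSPORT_OPTS ('-t tcp -o') plus an 8 KiB IO unit size
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # each poll group should now carry a TCP transport entry
    $rpc nvmf_get_stats | jq -r '.poll_groups[].transports[].trtype'                   # expect "TCP" four times
    # and no IO qpairs should exist yet
    $rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'  # expect 0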
00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.439 Malloc1 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.439 [2024-07-15 15:31:09.471540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -a 10.0.0.2 -s 4420 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -a 10.0.0.2 -s 4420 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -a 10.0.0.2 -s 4420 00:08:14.439 [2024-07-15 15:31:09.495782] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb' 00:08:14.439 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:14.439 could not add new controller: failed to write to nvme-fabrics device 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.439 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:14.697 15:31:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:14.697 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:14.697 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:14.697 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:14.697 15:31:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:16.596 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:16.596 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:16.596 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:16.596 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:16.596 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:16.596 15:31:11 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:16.596 15:31:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:16.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.854 15:31:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:16.854 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:16.854 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.854 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:16.854 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:16.854 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.854 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:16.855 [2024-07-15 15:31:11.777096] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb' 00:08:16.855 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:16.855 could not add new controller: failed to write to nvme-fabrics device 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:16.855 15:31:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:19.385 15:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:19.385 15:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:19.385 15:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:19.385 15:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:19.385 15:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:19.385 15:31:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:19.385 15:31:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:19.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:19.385 15:31:14 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.385 [2024-07-15 15:31:14.053880] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.385 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.386 15:31:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:19.386 15:31:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.386 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:19.386 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.386 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:19.386 15:31:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.288 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.289 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.289 [2024-07-15 15:31:16.349068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.289 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.289 15:31:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:21.289 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.289 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.289 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.289 15:31:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:21.289 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.289 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.289 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.289 15:31:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:21.548 15:31:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:21.548 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:21.548 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:21.548 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:21.548 15:31:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:23.449 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:23.449 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:23.449 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:23.449 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:23.449 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:23.449 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:23.449 15:31:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:23.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.706 15:31:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:23.706 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:23.706 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:23.706 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.706 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.707 [2024-07-15 15:31:18.652411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.707 15:31:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:24.017 15:31:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:24.017 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:24.017 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:24.017 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:24.017 15:31:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:25.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.972 [2024-07-15 15:31:20.955655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.972 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.973 15:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:25.973 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.973 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.973 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.973 15:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:25.973 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.973 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.973 15:31:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.973 15:31:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:26.232 15:31:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:26.232 15:31:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:26.232 15:31:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:26.232 15:31:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:26.232 15:31:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:28.134 
15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:28.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.134 [2024-07-15 15:31:23.251147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.134 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.393 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.393 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:28.393 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.393 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.393 15:31:23 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.393 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:28.393 15:31:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:28.393 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:28.393 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:28.393 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:28.393 15:31:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:30.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 [2024-07-15 15:31:25.546283] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 [2024-07-15 15:31:25.594299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 [2024-07-15 15:31:25.642375] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
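The loop running here (target/rpc.sh@99-107) repeatedly builds a subsystem and tears it straight back down without ever connecting a host. Stripped of the rpc_cmd wrapper and xtrace noise, each pass is roughly the following rpc.py sequence, using the NQN, serial, listen address and namespace already shown in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME            # subsystem with the test serial
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # TCP listener on port 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1                            # malloc bdev becomes nsid 1
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_delete_subsystem "$nqn"
    done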
00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 [2024-07-15 15:31:25.690411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 [2024-07-15 15:31:25.738444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:30.927 "poll_groups": [ 00:08:30.927 { 00:08:30.927 "admin_qpairs": 2, 00:08:30.927 "completed_nvme_io": 66, 00:08:30.927 "current_admin_qpairs": 0, 00:08:30.927 "current_io_qpairs": 0, 00:08:30.927 "io_qpairs": 16, 00:08:30.927 "name": "nvmf_tgt_poll_group_000", 00:08:30.927 "pending_bdev_io": 0, 00:08:30.927 "transports": [ 00:08:30.927 { 00:08:30.927 "trtype": "TCP" 00:08:30.927 } 00:08:30.927 ] 00:08:30.927 }, 00:08:30.927 { 00:08:30.927 "admin_qpairs": 3, 00:08:30.927 "completed_nvme_io": 68, 00:08:30.927 "current_admin_qpairs": 0, 00:08:30.927 "current_io_qpairs": 0, 00:08:30.927 "io_qpairs": 17, 00:08:30.927 "name": "nvmf_tgt_poll_group_001", 00:08:30.927 "pending_bdev_io": 0, 00:08:30.927 "transports": [ 00:08:30.927 { 00:08:30.927 "trtype": "TCP" 00:08:30.927 } 00:08:30.927 ] 00:08:30.927 }, 00:08:30.927 { 00:08:30.927 "admin_qpairs": 1, 00:08:30.927 
"completed_nvme_io": 120, 00:08:30.927 "current_admin_qpairs": 0, 00:08:30.927 "current_io_qpairs": 0, 00:08:30.927 "io_qpairs": 19, 00:08:30.927 "name": "nvmf_tgt_poll_group_002", 00:08:30.927 "pending_bdev_io": 0, 00:08:30.927 "transports": [ 00:08:30.927 { 00:08:30.927 "trtype": "TCP" 00:08:30.927 } 00:08:30.927 ] 00:08:30.927 }, 00:08:30.927 { 00:08:30.927 "admin_qpairs": 1, 00:08:30.927 "completed_nvme_io": 166, 00:08:30.927 "current_admin_qpairs": 0, 00:08:30.927 "current_io_qpairs": 0, 00:08:30.927 "io_qpairs": 18, 00:08:30.927 "name": "nvmf_tgt_poll_group_003", 00:08:30.927 "pending_bdev_io": 0, 00:08:30.927 "transports": [ 00:08:30.927 { 00:08:30.927 "trtype": "TCP" 00:08:30.927 } 00:08:30.927 ] 00:08:30.927 } 00:08:30.927 ], 00:08:30.927 "tick_rate": 2200000000 00:08:30.927 }' 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:30.927 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.928 rmmod nvme_tcp 00:08:30.928 rmmod nvme_fabrics 00:08:30.928 rmmod nvme_keyring 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67126 ']' 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67126 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67126 ']' 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67126 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:30.928 15:31:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67126 00:08:30.928 killing process with pid 67126 00:08:30.928 15:31:26 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:30.928 15:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:30.928 15:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67126' 00:08:30.928 15:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67126 00:08:30.928 15:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67126 00:08:31.186 15:31:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.187 15:31:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.187 15:31:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:31.187 15:31:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.187 15:31:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.187 15:31:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.187 15:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.187 15:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.187 15:31:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:31.187 00:08:31.187 real 0m18.619s 00:08:31.187 user 1m10.206s 00:08:31.187 sys 0m2.507s 00:08:31.187 15:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.187 15:31:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.187 ************************************ 00:08:31.187 END TEST nvmf_rpc 00:08:31.187 ************************************ 00:08:31.187 15:31:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:31.187 15:31:26 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:31.187 15:31:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:31.187 15:31:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.187 15:31:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:31.187 ************************************ 00:08:31.187 START TEST nvmf_invalid 00:08:31.187 ************************************ 00:08:31.187 15:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:31.446 * Looking for test storage... 
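The nvmf_rpc verification that finishes above sums per-poll-group counters out of nvmf_get_stats before the target is torn down. A minimal sketch of that summation pattern, reconstructed from the jq/awk pipeline visible in the trace (the real jsum helper sits around lines 19-20 of test/nvmf/target/rpc.sh; feeding jq from a $stats here-string is an assumption here, as is calling rpc.py directly instead of the suite's rpc_cmd wrapper):

    # Capture the stats JSON once, then sum a single numeric field across all poll groups.
    stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_stats)

    jsum() {
        local filter=$1
        # jq extracts one number per poll group; awk adds them up.
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+3+1+1 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 16+17+19+18 = 70 in this run

The 7 and 70 totals line up with the (( 7 > 0 )) and (( 70 > 0 )) checks recorded in the trace above.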
00:08:31.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.446 
15:31:26 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.446 15:31:26 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:31.446 Cannot find device "nvmf_tgt_br" 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.446 Cannot find device "nvmf_tgt_br2" 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:31.446 Cannot find device "nvmf_tgt_br" 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:31.446 Cannot find device "nvmf_tgt_br2" 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:08:31.446 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:31.447 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:31.447 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.447 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:08:31.447 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.447 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.447 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:08:31.447 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.447 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.447 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:31.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:08:31.706 00:08:31.706 --- 10.0.0.2 ping statistics --- 00:08:31.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.706 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:31.706 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:31.706 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:08:31.706 00:08:31.706 --- 10.0.0.3 ping statistics --- 00:08:31.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.706 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:31.706 00:08:31.706 --- 10.0.0.1 ping statistics --- 00:08:31.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.706 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67638 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67638 00:08:31.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 67638 ']' 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.706 15:31:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:31.965 [2024-07-15 15:31:26.839515] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
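The nvmf_veth_init sequence traced above is what those ping checks are validating: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and the host-side veth peers are joined by the nvmf_br bridge, with an iptables rule accepting NVMe/TCP traffic on port 4420. A condensed sketch of that wiring, assembled from the ip/iptables commands in the trace (run as root; the real nvmf_veth_init in test/nvmf/common.sh additionally tears down any leftover interfaces, which is what the "Cannot find device" lines above come from):

    # Namespace for the target plus three veth pairs: one initiator-side, two target-side.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peers together.
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Accept NVMe/TCP (port 4420) from the initiator interface and allow forwarding across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT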
00:08:31.965 [2024-07-15 15:31:26.839841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.965 [2024-07-15 15:31:26.976626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.965 [2024-07-15 15:31:27.045494] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.965 [2024-07-15 15:31:27.045800] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.965 [2024-07-15 15:31:27.045982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.965 [2024-07-15 15:31:27.046146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.965 [2024-07-15 15:31:27.046183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.965 [2024-07-15 15:31:27.046418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.965 [2024-07-15 15:31:27.046515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.965 [2024-07-15 15:31:27.046592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.965 [2024-07-15 15:31:27.046599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.223 15:31:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.223 15:31:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:32.223 15:31:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.223 15:31:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.223 15:31:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:32.223 15:31:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.223 15:31:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:32.223 15:31:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18246 00:08:32.488 [2024-07-15 15:31:27.406017] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:32.488 15:31:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 15:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18246 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:08:32.488 request: 00:08:32.488 { 00:08:32.488 "method": "nvmf_create_subsystem", 00:08:32.488 "params": { 00:08:32.488 "nqn": "nqn.2016-06.io.spdk:cnode18246", 00:08:32.488 "tgt_name": "foobar" 00:08:32.488 } 00:08:32.488 } 00:08:32.488 Got JSON-RPC error response 00:08:32.488 GoRPCClient: error on JSON-RPC call' 00:08:32.488 15:31:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 15:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18246 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:08:32.488 
request: 00:08:32.488 { 00:08:32.488 "method": "nvmf_create_subsystem", 00:08:32.488 "params": { 00:08:32.488 "nqn": "nqn.2016-06.io.spdk:cnode18246", 00:08:32.488 "tgt_name": "foobar" 00:08:32.488 } 00:08:32.488 } 00:08:32.488 Got JSON-RPC error response 00:08:32.488 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:32.489 15:31:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:32.489 15:31:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15785 00:08:32.755 [2024-07-15 15:31:27.734371] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15785: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:32.755 15:31:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 15:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15785 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:08:32.755 request: 00:08:32.755 { 00:08:32.755 "method": "nvmf_create_subsystem", 00:08:32.755 "params": { 00:08:32.755 "nqn": "nqn.2016-06.io.spdk:cnode15785", 00:08:32.755 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:08:32.755 } 00:08:32.755 } 00:08:32.755 Got JSON-RPC error response 00:08:32.755 GoRPCClient: error on JSON-RPC call' 00:08:32.755 15:31:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 15:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15785 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:08:32.755 request: 00:08:32.755 { 00:08:32.755 "method": "nvmf_create_subsystem", 00:08:32.755 "params": { 00:08:32.755 "nqn": "nqn.2016-06.io.spdk:cnode15785", 00:08:32.755 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:08:32.755 } 00:08:32.755 } 00:08:32.755 Got JSON-RPC error response 00:08:32.755 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:32.755 15:31:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:32.755 15:31:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12554 00:08:33.015 [2024-07-15 15:31:28.030617] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12554: invalid model number 'SPDK_Controller' 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 15:31:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode12554], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:08:33.015 request: 00:08:33.015 { 00:08:33.015 "method": "nvmf_create_subsystem", 00:08:33.015 "params": { 00:08:33.015 "nqn": "nqn.2016-06.io.spdk:cnode12554", 00:08:33.015 "model_number": "SPDK_Controller\u001f" 00:08:33.015 } 00:08:33.015 } 00:08:33.015 Got JSON-RPC error response 00:08:33.015 GoRPCClient: error on JSON-RPC call' 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 15:31:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode12554], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:08:33.015 request: 00:08:33.015 { 00:08:33.015 "method": "nvmf_create_subsystem", 00:08:33.015 "params": { 00:08:33.015 "nqn": "nqn.2016-06.io.spdk:cnode12554", 00:08:33.015 "model_number": "SPDK_Controller\u001f" 00:08:33.015 } 00:08:33.015 } 00:08:33.015 Got JSON-RPC error response 00:08:33.015 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:33.015 15:31:28 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:33.015 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
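The per-character trace running above and below this point is the gen_random_s helper from target/invalid.sh assembling a random string for the invalid-parameter tests: pick an ASCII code from the 32..127 table, render it via printf %x plus echo -e, and append it to the result. A compact sketch of the same technique, reconstructed from those visible steps (the helper's exact body may differ; the RANDOM=0 seed set at invalid.sh@16 earlier in the trace keeps the generated strings reproducible across runs):

    # Build a pseudo-random string of the requested length, one character at a time.
    # The 32..127 code table and the printf/echo -e conversion mirror the trace;
    # the function wrapper itself is an assumption.
    gen_random_s() {
        local length=$1 ll
        local string=""
        local chars=($(seq 32 127))   # ASCII codes 32 through 127, matching the chars=(...) array above
        for ((ll = 0; ll < length; ll++)); do
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

    gen_random_s 21   # 21 characters, as requested at invalid.sh@54 in the trace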
00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.016 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ { == \- ]] 00:08:33.275 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '{w6EL[O\<4e3^BP`y6jw5k]C_f`uWlZ*eDtG?' 00:08:33.795 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d ' [1^jA~2>[O\<4e3^BP`y6jw5k]C_f`uWlZ*eDtG?' nqn.2016-06.io.spdk:cnode21602 00:08:34.054 [2024-07-15 15:31:28.931477] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21602: invalid model number ' [1^jA~2>[O\<4e3^BP`y6jw5k]C_f`uWlZ*eDtG?' 00:08:34.054 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/15 15:31:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number: [1^jA~2>[O\<4e3^BP`y6jw5k]C_f`uWlZ*eDtG? nqn:nqn.2016-06.io.spdk:cnode21602], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN [1^jA~2>[O\<4e3^BP`y6jw5k]C_f`uWlZ*eDtG? 00:08:34.054 request: 00:08:34.054 { 00:08:34.054 "method": "nvmf_create_subsystem", 00:08:34.054 "params": { 00:08:34.054 "nqn": "nqn.2016-06.io.spdk:cnode21602", 00:08:34.054 "model_number": " [1^jA~2>[O\\<4e3^BP`y6jw5k]C_f`uWlZ*eDtG?" 00:08:34.054 } 00:08:34.054 } 00:08:34.054 Got JSON-RPC error response 00:08:34.054 GoRPCClient: error on JSON-RPC call' 00:08:34.054 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/15 15:31:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number: [1^jA~2>[O\<4e3^BP`y6jw5k]C_f`uWlZ*eDtG? 
nqn:nqn.2016-06.io.spdk:cnode21602], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN [1^jA~2>[O\<4e3^BP`y6jw5k]C_f`uWlZ*eDtG? 00:08:34.054 request: 00:08:34.054 { 00:08:34.054 "method": "nvmf_create_subsystem", 00:08:34.054 "params": { 00:08:34.054 "nqn": "nqn.2016-06.io.spdk:cnode21602", 00:08:34.054 "model_number": " [1^jA~2>[O\\<4e3^BP`y6jw5k]C_f`uWlZ*eDtG?" 00:08:34.054 } 00:08:34.054 } 00:08:34.054 Got JSON-RPC error response 00:08:34.054 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:34.054 15:31:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:34.313 [2024-07-15 15:31:29.239812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.313 15:31:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:34.571 15:31:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:34.571 15:31:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:34.571 15:31:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:34.571 15:31:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:34.571 15:31:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:34.830 [2024-07-15 15:31:29.847727] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:34.830 15:31:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/15 15:31:29 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:08:34.830 request: 00:08:34.830 { 00:08:34.830 "method": "nvmf_subsystem_remove_listener", 00:08:34.830 "params": { 00:08:34.830 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:34.830 "listen_address": { 00:08:34.830 "trtype": "tcp", 00:08:34.830 "traddr": "", 00:08:34.830 "trsvcid": "4421" 00:08:34.830 } 00:08:34.830 } 00:08:34.830 } 00:08:34.830 Got JSON-RPC error response 00:08:34.830 GoRPCClient: error on JSON-RPC call' 00:08:34.830 15:31:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/15 15:31:29 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:08:34.830 request: 00:08:34.830 { 00:08:34.830 "method": "nvmf_subsystem_remove_listener", 00:08:34.830 "params": { 00:08:34.830 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:34.830 "listen_address": { 00:08:34.830 "trtype": "tcp", 00:08:34.830 "traddr": "", 00:08:34.830 "trsvcid": "4421" 00:08:34.830 } 00:08:34.830 } 00:08:34.830 } 00:08:34.830 Got JSON-RPC error response 00:08:34.830 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:34.830 15:31:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2036 -i 0 00:08:35.088 [2024-07-15 15:31:30.087894] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode2036: invalid cntlid range [0-65519] 00:08:35.089 15:31:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/15 15:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode2036], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:08:35.089 request: 00:08:35.089 { 00:08:35.089 "method": "nvmf_create_subsystem", 00:08:35.089 "params": { 00:08:35.089 "nqn": "nqn.2016-06.io.spdk:cnode2036", 00:08:35.089 "min_cntlid": 0 00:08:35.089 } 00:08:35.089 } 00:08:35.089 Got JSON-RPC error response 00:08:35.089 GoRPCClient: error on JSON-RPC call' 00:08:35.089 15:31:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/15 15:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode2036], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:08:35.089 request: 00:08:35.089 { 00:08:35.089 "method": "nvmf_create_subsystem", 00:08:35.089 "params": { 00:08:35.089 "nqn": "nqn.2016-06.io.spdk:cnode2036", 00:08:35.089 "min_cntlid": 0 00:08:35.089 } 00:08:35.089 } 00:08:35.089 Got JSON-RPC error response 00:08:35.089 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:35.089 15:31:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10862 -i 65520 00:08:35.348 [2024-07-15 15:31:30.376233] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10862: invalid cntlid range [65520-65519] 00:08:35.348 15:31:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/15 15:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode10862], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:08:35.348 request: 00:08:35.348 { 00:08:35.348 "method": "nvmf_create_subsystem", 00:08:35.348 "params": { 00:08:35.348 "nqn": "nqn.2016-06.io.spdk:cnode10862", 00:08:35.348 "min_cntlid": 65520 00:08:35.348 } 00:08:35.348 } 00:08:35.348 Got JSON-RPC error response 00:08:35.348 GoRPCClient: error on JSON-RPC call' 00:08:35.348 15:31:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/15 15:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode10862], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:08:35.348 request: 00:08:35.348 { 00:08:35.348 "method": "nvmf_create_subsystem", 00:08:35.348 "params": { 00:08:35.348 "nqn": "nqn.2016-06.io.spdk:cnode10862", 00:08:35.348 "min_cntlid": 65520 00:08:35.348 } 00:08:35.348 } 00:08:35.348 Got JSON-RPC error response 00:08:35.348 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:35.348 15:31:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8198 -I 0 00:08:35.607 [2024-07-15 15:31:30.712720] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8198: invalid cntlid range [1-0] 00:08:35.866 15:31:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/15 15:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode8198], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:08:35.866 request: 00:08:35.866 { 00:08:35.866 "method": "nvmf_create_subsystem", 00:08:35.866 "params": { 00:08:35.866 "nqn": "nqn.2016-06.io.spdk:cnode8198", 00:08:35.866 "max_cntlid": 0 00:08:35.866 } 00:08:35.866 } 00:08:35.866 Got JSON-RPC error response 00:08:35.866 GoRPCClient: error on JSON-RPC call' 00:08:35.866 15:31:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/15 15:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode8198], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:08:35.866 request: 00:08:35.866 { 00:08:35.866 "method": "nvmf_create_subsystem", 00:08:35.866 "params": { 00:08:35.866 "nqn": "nqn.2016-06.io.spdk:cnode8198", 00:08:35.866 "max_cntlid": 0 00:08:35.866 } 00:08:35.866 } 00:08:35.866 Got JSON-RPC error response 00:08:35.866 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:35.866 15:31:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15764 -I 65520 00:08:36.125 [2024-07-15 15:31:31.000942] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15764: invalid cntlid range [1-65520] 00:08:36.125 15:31:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/15 15:31:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode15764], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:08:36.125 request: 00:08:36.125 { 00:08:36.125 "method": "nvmf_create_subsystem", 00:08:36.125 "params": { 00:08:36.125 "nqn": "nqn.2016-06.io.spdk:cnode15764", 00:08:36.125 "max_cntlid": 65520 00:08:36.125 } 00:08:36.125 } 00:08:36.125 Got JSON-RPC error response 00:08:36.125 GoRPCClient: error on JSON-RPC call' 00:08:36.125 15:31:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/15 15:31:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode15764], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:08:36.125 request: 00:08:36.125 { 00:08:36.125 "method": "nvmf_create_subsystem", 00:08:36.125 "params": { 00:08:36.125 "nqn": "nqn.2016-06.io.spdk:cnode15764", 00:08:36.125 "max_cntlid": 65520 00:08:36.125 } 00:08:36.125 } 00:08:36.125 Got JSON-RPC error response 00:08:36.125 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:36.125 15:31:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4306 -i 6 -I 5 00:08:36.384 [2024-07-15 15:31:31.293255] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4306: invalid cntlid range [6-5] 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/15 15:31:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode4306], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:08:36.384 request: 00:08:36.384 { 00:08:36.384 
"method": "nvmf_create_subsystem", 00:08:36.384 "params": { 00:08:36.384 "nqn": "nqn.2016-06.io.spdk:cnode4306", 00:08:36.384 "min_cntlid": 6, 00:08:36.384 "max_cntlid": 5 00:08:36.384 } 00:08:36.384 } 00:08:36.384 Got JSON-RPC error response 00:08:36.384 GoRPCClient: error on JSON-RPC call' 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/15 15:31:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode4306], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:08:36.384 request: 00:08:36.384 { 00:08:36.384 "method": "nvmf_create_subsystem", 00:08:36.384 "params": { 00:08:36.384 "nqn": "nqn.2016-06.io.spdk:cnode4306", 00:08:36.384 "min_cntlid": 6, 00:08:36.384 "max_cntlid": 5 00:08:36.384 } 00:08:36.384 } 00:08:36.384 Got JSON-RPC error response 00:08:36.384 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:36.384 { 00:08:36.384 "name": "foobar", 00:08:36.384 "method": "nvmf_delete_target", 00:08:36.384 "req_id": 1 00:08:36.384 } 00:08:36.384 Got JSON-RPC error response 00:08:36.384 response: 00:08:36.384 { 00:08:36.384 "code": -32602, 00:08:36.384 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:36.384 }' 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:36.384 { 00:08:36.384 "name": "foobar", 00:08:36.384 "method": "nvmf_delete_target", 00:08:36.384 "req_id": 1 00:08:36.384 } 00:08:36.384 Got JSON-RPC error response 00:08:36.384 response: 00:08:36.384 { 00:08:36.384 "code": -32602, 00:08:36.384 "message": "The specified target doesn't exist, cannot delete it." 
00:08:36.384 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:36.384 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:36.384 rmmod nvme_tcp 00:08:36.384 rmmod nvme_fabrics 00:08:36.384 rmmod nvme_keyring 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 67638 ']' 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 67638 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 67638 ']' 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 67638 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67638 00:08:36.643 killing process with pid 67638 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67638' 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 67638 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 67638 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:36.643 00:08:36.643 real 0m5.480s 00:08:36.643 user 0m22.177s 00:08:36.643 sys 0m1.213s 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.643 15:31:31 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.643 ************************************ 00:08:36.643 END TEST nvmf_invalid 00:08:36.643 ************************************ 00:08:36.902 15:31:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:36.903 15:31:31 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:36.903 15:31:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:36.903 15:31:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.903 15:31:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.903 ************************************ 00:08:36.903 START TEST nvmf_abort 00:08:36.903 ************************************ 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:36.903 * Looking for test storage... 00:08:36.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:36.903 15:31:31 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:36.903 Cannot find device "nvmf_tgt_br" 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:36.903 Cannot find device "nvmf_tgt_br2" 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:36.903 Cannot find device "nvmf_tgt_br" 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:36.903 Cannot find device "nvmf_tgt_br2" 00:08:36.903 15:31:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:08:36.903 15:31:31 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:37.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:37.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:37.162 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:37.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:37.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:08:37.162 00:08:37.162 --- 10.0.0.2 ping statistics --- 00:08:37.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.162 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:37.163 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:37.163 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:08:37.163 00:08:37.163 --- 10.0.0.3 ping statistics --- 00:08:37.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.163 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:37.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:37.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:37.163 00:08:37.163 --- 10.0.0.1 ping statistics --- 00:08:37.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.163 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68138 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68138 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68138 ']' 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
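
For readers following the trace, the target bring-up above boils down to a short sequence of commands. The sketch below is a condensed paraphrase of what nvmf_veth_init, nvmfappstart and waitforlisten have just done, assembled only from commands visible in the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity, and the socket-polling loop is an illustrative stand-in for waitforlisten, not the script's exact code):

    # Network plumbing performed by nvmf_veth_init in the entries above:
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    modprobe nvme-tcp

    # Start the target inside the namespace and wait for its RPC socket,
    # which is what nvmfappstart/waitforlisten are doing at this point in the log:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # illustrative poll only
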
00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.163 15:31:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.421 [2024-07-15 15:31:32.304322] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:08:37.421 [2024-07-15 15:31:32.304410] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.421 [2024-07-15 15:31:32.441418] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.421 [2024-07-15 15:31:32.501729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.421 [2024-07-15 15:31:32.502078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.421 [2024-07-15 15:31:32.502256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.421 [2024-07-15 15:31:32.502402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.421 [2024-07-15 15:31:32.502561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.421 [2024-07-15 15:31:32.502849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.421 [2024-07-15 15:31:32.502924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.421 [2024-07-15 15:31:32.502944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.356 [2024-07-15 15:31:33.340243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.356 Malloc0 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
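
The rpc_cmd entries immediately above and below this point are target/abort.sh configuring the freshly started target over JSON-RPC and then running the abort example against it. Stripped of the rpc_cmd wrapper, roughly the same sequence as plain rpc.py calls (all arguments taken from the trace; the wrapper's socket handling is left out) is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Queue depth 128 against the artificially delayed Delay0 namespace keeps
    # requests pending long enough for the tool to submit aborts against them:
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
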
00:08:38.356 Delay0 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.356 [2024-07-15 15:31:33.402517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.356 15:31:33 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:38.615 [2024-07-15 15:31:33.582105] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:40.514 Initializing NVMe Controllers 00:08:40.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:40.514 controller IO queue size 128 less than required 00:08:40.514 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:40.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:40.514 Initialization complete. Launching workers. 
00:08:40.514 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31323 00:08:40.514 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31384, failed to submit 62 00:08:40.514 success 31327, unsuccess 57, failed 0 00:08:40.514 15:31:35 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.514 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.514 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:40.514 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.514 15:31:35 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:40.514 15:31:35 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:40.514 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.514 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.773 rmmod nvme_tcp 00:08:40.773 rmmod nvme_fabrics 00:08:40.773 rmmod nvme_keyring 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68138 ']' 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68138 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68138 ']' 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68138 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68138 00:08:40.773 killing process with pid 68138 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68138' 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68138 00:08:40.773 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68138 00:08:41.032 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.032 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.032 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.032 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.032 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.032 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.032 15:31:35 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.032 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.032 15:31:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:41.032 ************************************ 00:08:41.032 END TEST nvmf_abort 00:08:41.032 ************************************ 00:08:41.032 00:08:41.032 real 0m4.153s 00:08:41.032 user 0m12.224s 00:08:41.032 sys 0m0.906s 00:08:41.032 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.032 15:31:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:41.032 15:31:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:41.032 15:31:36 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:41.032 15:31:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:41.032 15:31:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.032 15:31:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.032 ************************************ 00:08:41.032 START TEST nvmf_ns_hotplug_stress 00:08:41.032 ************************************ 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:41.032 * Looking for test storage... 00:08:41.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:41.032 15:31:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.032 15:31:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:41.032 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:41.033 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:41.291 Cannot find device "nvmf_tgt_br" 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:41.291 Cannot find device "nvmf_tgt_br2" 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:41.291 Cannot find device "nvmf_tgt_br" 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:41.291 Cannot find device "nvmf_tgt_br2" 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:41.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:41.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:41.291 15:31:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:41.291 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:41.549 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:41.549 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:41.549 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:41.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:08:41.550 00:08:41.550 --- 10.0.0.2 ping statistics --- 00:08:41.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.550 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:41.550 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:41.550 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:08:41.550 00:08:41.550 --- 10.0.0.3 ping statistics --- 00:08:41.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.550 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:41.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:41.550 00:08:41.550 --- 10.0.0.1 ping statistics --- 00:08:41.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.550 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68395 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68395 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68395 ']' 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.550 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.550 [2024-07-15 15:31:36.529952] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:08:41.550 [2024-07-15 15:31:36.530034] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.550 [2024-07-15 15:31:36.666358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:41.824 [2024-07-15 15:31:36.738004] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:41.824 [2024-07-15 15:31:36.738669] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.824 [2024-07-15 15:31:36.738959] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.824 [2024-07-15 15:31:36.739274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.824 [2024-07-15 15:31:36.739493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.824 [2024-07-15 15:31:36.739843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.825 [2024-07-15 15:31:36.739928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.825 [2024-07-15 15:31:36.740032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.825 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.825 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:41.825 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.825 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.825 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.825 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.825 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:41.825 15:31:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:42.102 [2024-07-15 15:31:37.128567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.103 15:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:42.361 15:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.618 [2024-07-15 15:31:37.670453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.618 15:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.876 15:31:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:43.133 Malloc0 00:08:43.133 15:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:43.699 Delay0 00:08:43.699 15:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.699 15:31:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:43.957 NULL1 00:08:43.957 
15:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:44.214 15:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:44.214 15:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68519 00:08:44.214 15:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:44.214 15:31:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.589 Read completed with error (sct=0, sc=11) 00:08:45.589 15:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.847 15:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:45.847 15:31:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:46.105 true 00:08:46.105 15:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:46.105 15:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.038 15:31:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.038 15:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:47.038 15:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:47.295 true 00:08:47.295 15:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:47.295 15:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.553 15:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.810 15:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:47.810 15:31:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:48.069 true 00:08:48.069 15:31:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:48.069 15:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.345 15:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.603 15:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:48.603 15:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:48.861 true 00:08:48.861 15:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:48.861 15:31:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.797 15:31:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.055 15:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:50.055 15:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:50.313 true 00:08:50.313 15:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:50.313 15:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.571 15:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.830 15:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:50.830 15:31:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:51.088 true 00:08:51.088 15:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:51.088 15:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.033 15:31:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.033 15:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:52.033 15:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:52.292 true 00:08:52.292 15:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:52.292 15:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.550 15:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.808 15:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:52.808 15:31:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:53.066 true 00:08:53.066 15:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:53.066 15:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.325 15:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.583 15:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:53.583 15:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:53.842 true 00:08:53.842 15:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:53.842 15:31:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.777 15:31:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.343 15:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:55.343 15:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:55.343 true 00:08:55.343 15:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:55.343 15:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.602 15:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.861 15:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:55.861 15:31:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:56.120 true 00:08:56.120 15:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:56.120 15:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.379 15:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.637 15:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:56.637 15:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:56.895 true 
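
By this point the repeating pattern of the stress loop is visible in the trace: while the spdk_nvme_perf job started at ns_hotplug_stress.sh@40 (PERF_PID 68519) stays alive, the script keeps hot-removing and re-adding the Delay0 namespace and growing NULL1. A hedged reconstruction of that loop, using only the commands that appear in the trace (the real script carries extra bookkeeping around these calls), looks like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Background I/O load, as launched at ns_hotplug_stress.sh@40 above:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                            # sh@44 in the trace
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # sh@45: hot-remove nsid 1 (Delay0)
        $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: hot-add it back
        null_size=$((null_size + 1))                                     # sh@49
        $rpc bdev_null_resize NULL1 "$null_size"                         # sh@50: grow the NULL1 namespace's bdev
    done
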
00:08:56.895 15:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:56.895 15:31:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.830 15:31:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.089 15:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:58.089 15:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:58.347 true 00:08:58.347 15:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:58.347 15:31:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.281 15:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.540 15:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:59.540 15:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:59.799 true 00:08:59.799 15:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:08:59.799 15:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.058 15:31:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.316 15:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:00.316 15:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:00.576 true 00:09:00.576 15:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:00.576 15:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.834 15:31:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.093 15:31:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:01.093 15:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:01.352 true 00:09:01.352 15:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:01.352 15:31:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.286 15:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.543 15:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:02.543 15:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:02.800 true 00:09:02.800 15:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:02.800 15:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.057 15:31:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.313 15:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:03.313 15:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:03.624 true 00:09:03.624 15:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:03.624 15:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.624 15:31:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.189 15:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:04.189 15:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:04.189 true 00:09:04.189 15:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:04.189 15:31:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.122 15:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.689 15:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:05.689 15:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:05.689 true 00:09:05.947 15:32:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:05.947 15:32:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.206 15:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.206 15:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:06.206 15:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:06.464 true 00:09:06.464 15:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:06.464 15:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.723 15:32:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.981 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:06.981 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:07.257 true 00:09:07.257 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:07.257 15:32:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.192 15:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.459 15:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:08.459 15:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:08.718 true 00:09:08.718 15:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:08.718 15:32:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.978 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.238 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:09.238 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:09.496 true 00:09:09.496 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:09.496 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.755 15:32:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.014 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:10.014 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:10.272 true 00:09:10.272 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:10.272 15:32:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.208 15:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.466 15:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:11.466 15:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:11.724 true 00:09:11.724 15:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:11.724 15:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.983 15:32:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.242 15:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:12.242 15:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:12.500 true 00:09:12.501 15:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:12.501 15:32:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.436 15:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.436 15:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:13.436 15:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:13.694 true 00:09:13.694 15:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:13.694 15:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.952 15:32:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.212 15:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:14.212 15:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:14.470 true 00:09:14.470 15:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:14.470 15:32:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:15.404 Initializing NVMe Controllers
00:09:15.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:15.405 Controller IO queue size 128, less than required.
00:09:15.405 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:15.405 Controller IO queue size 128, less than required.
00:09:15.405 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:15.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:15.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:15.405 Initialization complete. Launching workers.
00:09:15.405 ========================================================
00:09:15.405 Latency(us)
00:09:15.405 Device Information : IOPS MiB/s Average min max
00:09:15.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 613.07 0.30 99821.31 3263.31 1058674.74
00:09:15.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9234.86 4.51 13860.90 3818.31 573555.59
00:09:15.405 ========================================================
00:09:15.405 Total : 9847.92 4.81 19212.22 3263.31 1058674.74
00:09:15.405
00:09:15.405 15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.405 15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:15.405 15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:15.663 true 00:09:15.922 15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68519 00:09:15.922 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68519) - No such process 00:09:15.922 15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68519 00:09:15.922 15:32:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.922 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.180 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:16.180 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:16.180 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:16.180 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.180 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:16.469 null0 00:09:16.469 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:16.469 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.469 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:16.758 null1 00:09:16.758 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:16.758 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.758 15:32:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:17.017 null2 00:09:17.017 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.017 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.017 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:17.275 null3 00:09:17.275 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.275 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.275 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:17.533 null4 00:09:17.533 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.533 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.533 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:17.791 null5 00:09:17.791 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.791 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.791 15:32:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:18.049 null6 00:09:18.049 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:18.049 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:18.049 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:18.307 null7 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
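By this point the I/O job has exited (the "No such process" and wait 68519 entries above), the original namespaces are removed, and the @58-@60 entries have created eight null bdevs, null0 through null7, each with the arguments 100 4096 exactly as traced. A sketch of that setup, assuming the loop shape implied by the (( i = 0 )) / (( i < nthreads )) / (( ++i )) trace lines:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # one standalone null bdev per worker, created exactly as traced
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done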
00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.565 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
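The interleaved @14-@18 and @62-@66 entries from here to the end of the run come from eight backgrounded add_remove workers, each of which repeatedly attaches its own null bdev as a namespace and detaches it again, ten times. A hedged reconstruction of that worker and its launch loop, inferred from the trace; the function body and variable names are assumptions, not the script verbatim:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach the bdev under a fixed namespace ID, then pull it out again
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # nsid 1..8 paired with null0..null7, as traced
        pids+=($!)
    done
    wait "${pids[@]}"   # corresponds to the "wait 69581 69582 ..." entry just below

Because all eight workers hit the same subsystem concurrently, the add and remove RPCs land in arbitrary order, which is why the @17 and @18 entries below appear shuffled across namespace IDs.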
00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69581 69582 69584 69587 69588 69590 69591 69595 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.566 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.825 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.825 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.825 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.825 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.825 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.825 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.825 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.825 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.825 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.825 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.825 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.084 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.084 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.084 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.084 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.084 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.084 15:32:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.084 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.084 15:32:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.342 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.342 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.342 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.342 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.342 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.342 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.342 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.602 
15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.602 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.860 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.860 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.861 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.861 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.861 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.861 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.861 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:19.861 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.861 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.861 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.861 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.861 15:32:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.119 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:20.378 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.637 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.895 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.895 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.895 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.895 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.895 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.895 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:20.896 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.896 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.896 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.896 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.896 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.896 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:20.896 15:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.155 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:21.413 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.675 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.675 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.675 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.675 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.675 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.675 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.675 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.675 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.675 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.675 15:32:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.675 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.675 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.675 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.676 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.676 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.934 15:32:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:21.934 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.934 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:22.191 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:09:22.191 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:22.191 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.191 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.191 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:22.191 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.191 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:22.191 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.449 15:32:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.449 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.711 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.969 15:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:22.969 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.969 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.969 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:22.969 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.969 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.969 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:23.226 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:23.226 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.226 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.226 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:23.226 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:23.226 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.226 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.226 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:23.226 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:23.226 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:23.226 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.481 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.482 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:23.482 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.482 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.482 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:23.739 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:24.005 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.005 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.005 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.005 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.005 15:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.005 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:24.005 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.005 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.005 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.005 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.005 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.005 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.005 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.005 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.263 rmmod nvme_tcp 00:09:24.263 rmmod nvme_fabrics 00:09:24.263 rmmod nvme_keyring 00:09:24.263 15:32:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68395 ']' 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68395 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 68395 ']' 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68395 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68395 00:09:24.263 killing process with pid 68395 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68395' 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68395 00:09:24.263 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68395 00:09:24.553 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.553 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:24.553 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:24.553 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.553 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.553 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.553 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.553 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.553 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:24.553 00:09:24.553 real 0m43.521s 00:09:24.553 user 3m31.231s 00:09:24.553 sys 0m12.323s 00:09:24.553 ************************************ 00:09:24.553 END TEST nvmf_ns_hotplug_stress 00:09:24.553 ************************************ 00:09:24.553 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.553 15:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.553 15:32:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:24.553 15:32:19 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:24.553 15:32:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:24.553 15:32:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.553 15:32:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
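For reference, a condensed sketch of the namespace hot-plug loop that produced the ns_hotplug_stress.sh@16-@18 lines traced above. The rpc.py path, the cnode1 NQN, and the null0..null7 bdev names are copied from the log; the loop shape and the strictly sequential ordering are reading aids only (the trace shows the add/remove calls interleaved out of order, which this sketch does not reproduce):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
i=0
while (( i < 10 )); do                                  # matches the repeated "(( i < 10 ))" counter lines
    for n in 1 2 3 4 5 6 7 8; do
        # hot-add namespace n backed by bdev null$((n-1)), e.g. "-n 2 ... null1" in the trace
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    for n in 1 2 3 4 5 6 7 8; do
        # hot-remove the same namespaces again while the stress workload keeps running
        # (the I/O side of the test is not visible in this part of the log)
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
    (( ++i ))                                           # matches the "(( ++i ))" lines
done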
00:09:24.553 ************************************ 00:09:24.553 START TEST nvmf_connect_stress 00:09:24.553 ************************************ 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:24.553 * Looking for test storage... 00:09:24.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.553 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:24.554 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.554 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:24.812 Cannot find device "nvmf_tgt_br" 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:24.812 Cannot find device "nvmf_tgt_br2" 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:24.812 Cannot find device "nvmf_tgt_br" 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:24.812 Cannot find device "nvmf_tgt_br2" 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:09:24.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:24.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:24.812 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:25.071 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:25.071 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:25.071 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:25.071 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:25.071 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:25.071 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:25.071 15:32:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:25.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:25.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:09:25.071 00:09:25.071 --- 10.0.0.2 ping statistics --- 00:09:25.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.071 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:25.071 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:25.071 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:09:25.071 00:09:25.071 --- 10.0.0.3 ping statistics --- 00:09:25.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.071 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:25.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:25.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:09:25.071 00:09:25.071 --- 10.0.0.1 ping statistics --- 00:09:25.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.071 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=70895 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 70895 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 70895 ']' 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
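While the target application starts up, it helps to pin down the virtual topology it will listen on: the nvmf_veth_init sequence traced above (nvmf/common.sh@154-@207) builds it and the 10.0.0.x pings verify it. Below is a condensed sketch using only commands visible in the log; interface, namespace, and address names are the ones printed above, the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the FORWARD rule are elided, and error handling is omitted:

ip netns add nvmf_tgt_ns_spdk                                   # target runs inside its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge                                 # bridge joining both veth halves
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP on the test port
ping -c 1 10.0.0.2                                              # the reachability check whose output appears above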
00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.071 15:32:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.071 [2024-07-15 15:32:20.085845] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:09:25.071 [2024-07-15 15:32:20.085984] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.329 [2024-07-15 15:32:20.221368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:25.329 [2024-07-15 15:32:20.280576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.329 [2024-07-15 15:32:20.280663] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.329 [2024-07-15 15:32:20.280690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.329 [2024-07-15 15:32:20.280715] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.329 [2024-07-15 15:32:20.280728] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.329 [2024-07-15 15:32:20.281194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.329 [2024-07-15 15:32:20.281456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.329 [2024-07-15 15:32:20.281462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.266 [2024-07-15 15:32:21.089836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.266 [2024-07-15 15:32:21.107248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.266 NULL1 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=70947 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.266 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.525 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.525 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:26.525 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.525 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.525 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.783 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.783 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:26.783 15:32:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.783 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.783 15:32:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.040 15:32:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:09:27.040 15:32:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:27.040 15:32:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.040 15:32:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.040 15:32:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.606 15:32:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.606 15:32:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:27.606 15:32:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.606 15:32:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.606 15:32:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.864 15:32:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.864 15:32:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:27.864 15:32:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.864 15:32:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.864 15:32:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.123 15:32:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.123 15:32:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:28.123 15:32:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.123 15:32:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.123 15:32:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.380 15:32:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.380 15:32:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:28.380 15:32:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.380 15:32:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.380 15:32:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.945 15:32:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.946 15:32:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:28.946 15:32:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.946 15:32:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.946 15:32:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 15:32:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.204 15:32:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:29.204 15:32:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.204 15:32:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.204 15:32:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.463 15:32:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.463 15:32:24 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 70947 00:09:29.463 15:32:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.463 15:32:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.463 15:32:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.722 15:32:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.722 15:32:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:29.722 15:32:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.722 15:32:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.722 15:32:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.980 15:32:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.980 15:32:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:29.980 15:32:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.980 15:32:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.980 15:32:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.547 15:32:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.547 15:32:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:30.547 15:32:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.547 15:32:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.547 15:32:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.806 15:32:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.806 15:32:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:30.806 15:32:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.806 15:32:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.806 15:32:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.065 15:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.065 15:32:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:31.065 15:32:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.065 15:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.065 15:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.324 15:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.324 15:32:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:31.324 15:32:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.324 15:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.324 15:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.583 15:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.583 15:32:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:31.583 15:32:26 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.583 15:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.583 15:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.153 15:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.153 15:32:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:32.153 15:32:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.153 15:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.153 15:32:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.411 15:32:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.411 15:32:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:32.411 15:32:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.411 15:32:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.411 15:32:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.669 15:32:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.669 15:32:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:32.669 15:32:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.669 15:32:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.669 15:32:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.927 15:32:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.927 15:32:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:32.927 15:32:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.927 15:32:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.927 15:32:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.185 15:32:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.185 15:32:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:33.185 15:32:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.185 15:32:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.185 15:32:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.751 15:32:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.751 15:32:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:33.751 15:32:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.751 15:32:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.751 15:32:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.009 15:32:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.009 15:32:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:34.009 15:32:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:09:34.009 15:32:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.009 15:32:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.268 15:32:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.268 15:32:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:34.268 15:32:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.268 15:32:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.268 15:32:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.527 15:32:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.527 15:32:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:34.527 15:32:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.527 15:32:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.527 15:32:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.785 15:32:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.785 15:32:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:34.785 15:32:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.785 15:32:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.785 15:32:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.352 15:32:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.352 15:32:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:35.352 15:32:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.352 15:32:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.352 15:32:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.610 15:32:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.610 15:32:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:35.610 15:32:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.610 15:32:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.610 15:32:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.868 15:32:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.868 15:32:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:35.868 15:32:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.868 15:32:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.868 15:32:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.126 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.126 15:32:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:36.126 15:32:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.126 15:32:31 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.126 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.385 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:36.385 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.385 15:32:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70947 00:09:36.385 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (70947) - No such process 00:09:36.385 15:32:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 70947 00:09:36.385 15:32:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:36.385 15:32:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:36.385 15:32:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:36.385 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:36.385 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:36.644 rmmod nvme_tcp 00:09:36.644 rmmod nvme_fabrics 00:09:36.644 rmmod nvme_keyring 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 70895 ']' 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 70895 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 70895 ']' 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 70895 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70895 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:36.644 killing process with pid 70895 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70895' 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 70895 00:09:36.644 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 70895 00:09:36.902 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.902 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.902 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
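The burst of near-identical connect_stress.sh@34/@35 entries above is a monitor loop: line 34 probes the stress process with kill -0 and line 35 keeps driving the target over JSON-RPC until that process exits, which is what the "kill: (70947) - No such process" message finally signals. A minimal sketch of that idiom, reconstructed only from the script line numbers visible in the trace (the loop body and the contents of rpc.txt are not shown in this log, so the RPC payload below is an assumption):

  perf_pid=70947                                          # stress process launched earlier in this test
  testdir=/home/vagrant/spdk_repo/spdk/test/nvmf/target   # directory of connect_stress.sh, per the paths above
  while kill -0 "$perf_pid"; do                           # line 34: is the stress process still alive?
      rpc_cmd < "$testdir/rpc.txt"                        # line 35: assumed batch of JSON-RPC calls against the target
  done
  wait "$perf_pid"                                        # line 38: reap the process once kill -0 starts failing
  rm -f "$testdir/rpc.txt"                                # line 39: drop the temporary RPC batch file
  trap - SIGINT SIGTERM EXIT                              # line 41: clear the error trap
  nvmftestfini                                            # line 43: tear down the target and the veth/netns plumbing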
00:09:36.902 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.902 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.902 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.902 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.902 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.902 15:32:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:36.902 00:09:36.902 real 0m12.231s 00:09:36.902 user 0m40.886s 00:09:36.902 sys 0m3.333s 00:09:36.902 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.902 15:32:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.902 ************************************ 00:09:36.902 END TEST nvmf_connect_stress 00:09:36.902 ************************************ 00:09:36.902 15:32:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:36.902 15:32:31 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:36.902 15:32:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:36.902 15:32:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.902 15:32:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.902 ************************************ 00:09:36.902 START TEST nvmf_fused_ordering 00:09:36.903 ************************************ 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:36.903 * Looking for test storage... 
00:09:36.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:36.903 15:32:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:36.903 Cannot find device "nvmf_tgt_br" 00:09:36.903 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:09:36.903 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.903 Cannot find device "nvmf_tgt_br2" 00:09:36.903 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:09:36.903 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:36.903 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:36.903 Cannot find device "nvmf_tgt_br" 00:09:36.903 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:09:36.903 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:37.162 Cannot find device "nvmf_tgt_br2" 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:09:37.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:37.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:37.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:09:37.162 00:09:37.162 --- 10.0.0.2 ping statistics --- 00:09:37.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.162 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:37.162 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:37.162 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:09:37.162 00:09:37.162 --- 10.0.0.3 ping statistics --- 00:09:37.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.162 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:37.162 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:37.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:37.421 00:09:37.421 --- 10.0.0.1 ping statistics --- 00:09:37.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.421 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71268 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71268 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71268 ']' 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
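The three successful pings close out nvmf_veth_init, the helper that rebuilt the virtual topology after the previous test's cleanup: a network namespace for the SPDK target, three veth pairs, a bridge joining the host-side peers, and iptables rules opening TCP port 4420. Condensed from the commands visible in the trace (run as root; the per-interface "ip link set ... up" calls are elided, so this is a sketch of what the log shows rather than the full helper):

  ip netns add nvmf_tgt_ns_spdk                                  # namespace that will host nvmf_tgt
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # first target-side pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second listener address
  ip link add nvmf_br type bridge                                # bridge tying the host-side peers together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP traffic on the initiator veth
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let the bridge forward between its ports
  ping -c 1 10.0.0.2                                             # the pings above verify 10.0.0.2, 10.0.0.3 and 10.0.0.1

Because NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD shown above, the target process (pid 71268 in the log) runs inside nvmf_tgt_ns_spdk and its listeners bind to the namespaced 10.0.0.2/10.0.0.3 addresses, while the initiator side stays in the root namespace.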
00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.421 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:37.421 [2024-07-15 15:32:32.380648] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:09:37.421 [2024-07-15 15:32:32.380760] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.421 [2024-07-15 15:32:32.516212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.680 [2024-07-15 15:32:32.571126] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.681 [2024-07-15 15:32:32.571195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.681 [2024-07-15 15:32:32.571223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.681 [2024-07-15 15:32:32.571231] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.681 [2024-07-15 15:32:32.571238] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.681 [2024-07-15 15:32:32.571267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:37.681 [2024-07-15 15:32:32.703192] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.681 15:32:32 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:37.681 [2024-07-15 15:32:32.719269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:37.681 NULL1 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.681 15:32:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:37.681 [2024-07-15 15:32:32.773658] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
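Once nvmf_tgt is listening on the RPC socket, fused_ordering.sh provisions it entirely over JSON-RPC and then launches the fused_ordering binary whose initialization and per-command output follow. The sequence below restates the rpc_cmd calls from script lines 15-22 exactly as they appear in the trace; the arguments are copied from the log, but the flag descriptions in the comments are inferred rather than taken from the log and should be checked against rpc.py --help.

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                  # line 15: create the TCP transport (-u 8192 sets in-capsule data size)
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
          -a -s SPDK00000000000001 -m 10                           # line 16: subsystem allowing any host, serial number, max 10 namespaces
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420                               # line 17: listen on the namespaced veth address
  rpc_cmd bdev_null_create NULL1 1000 512                          # line 18: 1000 MB null bdev with 512-byte blocks (reported later as "size: 1GB")
  rpc_cmd bdev_wait_for_examine                                    # line 19
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # line 20: expose the null bdev as namespace 1
  /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'   # line 22: drive fused commands at the subsystem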
00:09:37.681 [2024-07-15 15:32:32.773718] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71304 ] 00:09:38.249 Attached to nqn.2016-06.io.spdk:cnode1 00:09:38.249 Namespace ID: 1 size: 1GB 00:09:38.249 fused_ordering(0) 00:09:38.249 fused_ordering(1) 00:09:38.249 fused_ordering(2) 00:09:38.249 fused_ordering(3) 00:09:38.249 fused_ordering(4) 00:09:38.249 fused_ordering(5) 00:09:38.249 fused_ordering(6) 00:09:38.249 fused_ordering(7) 00:09:38.249 fused_ordering(8) 00:09:38.249 fused_ordering(9) 00:09:38.249 fused_ordering(10) 00:09:38.249 fused_ordering(11) 00:09:38.249 fused_ordering(12) 00:09:38.249 fused_ordering(13) 00:09:38.249 fused_ordering(14) 00:09:38.249 fused_ordering(15) 00:09:38.249 fused_ordering(16) 00:09:38.249 fused_ordering(17) 00:09:38.249 fused_ordering(18) 00:09:38.249 fused_ordering(19) 00:09:38.249 fused_ordering(20) 00:09:38.249 fused_ordering(21) 00:09:38.249 fused_ordering(22) 00:09:38.249 fused_ordering(23) 00:09:38.249 fused_ordering(24) 00:09:38.249 fused_ordering(25) 00:09:38.249 fused_ordering(26) 00:09:38.249 fused_ordering(27) 00:09:38.249 fused_ordering(28) 00:09:38.249 fused_ordering(29) 00:09:38.249 fused_ordering(30) 00:09:38.249 fused_ordering(31) 00:09:38.249 fused_ordering(32) 00:09:38.249 fused_ordering(33) 00:09:38.249 fused_ordering(34) 00:09:38.249 fused_ordering(35) 00:09:38.249 fused_ordering(36) 00:09:38.249 fused_ordering(37) 00:09:38.249 fused_ordering(38) 00:09:38.249 fused_ordering(39) 00:09:38.249 fused_ordering(40) 00:09:38.249 fused_ordering(41) 00:09:38.249 fused_ordering(42) 00:09:38.249 fused_ordering(43) 00:09:38.249 fused_ordering(44) 00:09:38.249 fused_ordering(45) 00:09:38.249 fused_ordering(46) 00:09:38.249 fused_ordering(47) 00:09:38.249 fused_ordering(48) 00:09:38.249 fused_ordering(49) 00:09:38.249 fused_ordering(50) 00:09:38.249 fused_ordering(51) 00:09:38.249 fused_ordering(52) 00:09:38.249 fused_ordering(53) 00:09:38.249 fused_ordering(54) 00:09:38.249 fused_ordering(55) 00:09:38.249 fused_ordering(56) 00:09:38.249 fused_ordering(57) 00:09:38.249 fused_ordering(58) 00:09:38.249 fused_ordering(59) 00:09:38.249 fused_ordering(60) 00:09:38.249 fused_ordering(61) 00:09:38.249 fused_ordering(62) 00:09:38.249 fused_ordering(63) 00:09:38.249 fused_ordering(64) 00:09:38.249 fused_ordering(65) 00:09:38.249 fused_ordering(66) 00:09:38.249 fused_ordering(67) 00:09:38.249 fused_ordering(68) 00:09:38.249 fused_ordering(69) 00:09:38.249 fused_ordering(70) 00:09:38.249 fused_ordering(71) 00:09:38.249 fused_ordering(72) 00:09:38.249 fused_ordering(73) 00:09:38.249 fused_ordering(74) 00:09:38.249 fused_ordering(75) 00:09:38.249 fused_ordering(76) 00:09:38.249 fused_ordering(77) 00:09:38.249 fused_ordering(78) 00:09:38.249 fused_ordering(79) 00:09:38.249 fused_ordering(80) 00:09:38.249 fused_ordering(81) 00:09:38.249 fused_ordering(82) 00:09:38.249 fused_ordering(83) 00:09:38.249 fused_ordering(84) 00:09:38.249 fused_ordering(85) 00:09:38.249 fused_ordering(86) 00:09:38.249 fused_ordering(87) 00:09:38.249 fused_ordering(88) 00:09:38.249 fused_ordering(89) 00:09:38.249 fused_ordering(90) 00:09:38.249 fused_ordering(91) 00:09:38.249 fused_ordering(92) 00:09:38.249 fused_ordering(93) 00:09:38.249 fused_ordering(94) 00:09:38.249 fused_ordering(95) 00:09:38.249 fused_ordering(96) 00:09:38.249 fused_ordering(97) 00:09:38.249 
fused_ordering(98) 00:09:38.249 fused_ordering(99) 00:09:38.249 fused_ordering(100) 00:09:38.249 fused_ordering(101) 00:09:38.249 fused_ordering(102) 00:09:38.249 fused_ordering(103) 00:09:38.249 fused_ordering(104) 00:09:38.249 fused_ordering(105) 00:09:38.249 fused_ordering(106) 00:09:38.249 fused_ordering(107) 00:09:38.249 fused_ordering(108) 00:09:38.249 fused_ordering(109) 00:09:38.249 fused_ordering(110) 00:09:38.249 fused_ordering(111) 00:09:38.249 fused_ordering(112) 00:09:38.249 fused_ordering(113) 00:09:38.249 fused_ordering(114) 00:09:38.249 fused_ordering(115) 00:09:38.249 fused_ordering(116) 00:09:38.249 fused_ordering(117) 00:09:38.249 fused_ordering(118) 00:09:38.249 fused_ordering(119) 00:09:38.249 fused_ordering(120) 00:09:38.249 fused_ordering(121) 00:09:38.249 fused_ordering(122) 00:09:38.249 fused_ordering(123) 00:09:38.249 fused_ordering(124) 00:09:38.249 fused_ordering(125) 00:09:38.249 fused_ordering(126) 00:09:38.249 fused_ordering(127) 00:09:38.249 fused_ordering(128) 00:09:38.249 fused_ordering(129) 00:09:38.249 fused_ordering(130) 00:09:38.249 fused_ordering(131) 00:09:38.249 fused_ordering(132) 00:09:38.249 fused_ordering(133) 00:09:38.249 fused_ordering(134) 00:09:38.249 fused_ordering(135) 00:09:38.249 fused_ordering(136) 00:09:38.249 fused_ordering(137) 00:09:38.249 fused_ordering(138) 00:09:38.249 fused_ordering(139) 00:09:38.249 fused_ordering(140) 00:09:38.249 fused_ordering(141) 00:09:38.249 fused_ordering(142) 00:09:38.249 fused_ordering(143) 00:09:38.249 fused_ordering(144) 00:09:38.249 fused_ordering(145) 00:09:38.249 fused_ordering(146) 00:09:38.249 fused_ordering(147) 00:09:38.249 fused_ordering(148) 00:09:38.249 fused_ordering(149) 00:09:38.249 fused_ordering(150) 00:09:38.249 fused_ordering(151) 00:09:38.249 fused_ordering(152) 00:09:38.249 fused_ordering(153) 00:09:38.249 fused_ordering(154) 00:09:38.249 fused_ordering(155) 00:09:38.249 fused_ordering(156) 00:09:38.249 fused_ordering(157) 00:09:38.249 fused_ordering(158) 00:09:38.249 fused_ordering(159) 00:09:38.249 fused_ordering(160) 00:09:38.249 fused_ordering(161) 00:09:38.249 fused_ordering(162) 00:09:38.249 fused_ordering(163) 00:09:38.249 fused_ordering(164) 00:09:38.249 fused_ordering(165) 00:09:38.249 fused_ordering(166) 00:09:38.249 fused_ordering(167) 00:09:38.249 fused_ordering(168) 00:09:38.249 fused_ordering(169) 00:09:38.249 fused_ordering(170) 00:09:38.249 fused_ordering(171) 00:09:38.249 fused_ordering(172) 00:09:38.249 fused_ordering(173) 00:09:38.249 fused_ordering(174) 00:09:38.249 fused_ordering(175) 00:09:38.249 fused_ordering(176) 00:09:38.249 fused_ordering(177) 00:09:38.249 fused_ordering(178) 00:09:38.249 fused_ordering(179) 00:09:38.249 fused_ordering(180) 00:09:38.249 fused_ordering(181) 00:09:38.249 fused_ordering(182) 00:09:38.249 fused_ordering(183) 00:09:38.249 fused_ordering(184) 00:09:38.249 fused_ordering(185) 00:09:38.249 fused_ordering(186) 00:09:38.249 fused_ordering(187) 00:09:38.249 fused_ordering(188) 00:09:38.249 fused_ordering(189) 00:09:38.249 fused_ordering(190) 00:09:38.249 fused_ordering(191) 00:09:38.249 fused_ordering(192) 00:09:38.249 fused_ordering(193) 00:09:38.249 fused_ordering(194) 00:09:38.249 fused_ordering(195) 00:09:38.249 fused_ordering(196) 00:09:38.249 fused_ordering(197) 00:09:38.249 fused_ordering(198) 00:09:38.249 fused_ordering(199) 00:09:38.249 fused_ordering(200) 00:09:38.249 fused_ordering(201) 00:09:38.249 fused_ordering(202) 00:09:38.249 fused_ordering(203) 00:09:38.249 fused_ordering(204) 00:09:38.249 fused_ordering(205) 
00:09:38.508 fused_ordering(206) 00:09:38.508 fused_ordering(207) 00:09:38.508 fused_ordering(208) 00:09:38.508 fused_ordering(209) 00:09:38.508 fused_ordering(210) 00:09:38.508 fused_ordering(211) 00:09:38.508 fused_ordering(212) 00:09:38.508 fused_ordering(213) 00:09:38.508 fused_ordering(214) 00:09:38.508 fused_ordering(215) 00:09:38.508 fused_ordering(216) 00:09:38.508 fused_ordering(217) 00:09:38.508 fused_ordering(218) 00:09:38.508 fused_ordering(219) 00:09:38.508 fused_ordering(220) 00:09:38.508 fused_ordering(221) 00:09:38.508 fused_ordering(222) 00:09:38.508 fused_ordering(223) 00:09:38.508 fused_ordering(224) 00:09:38.509 fused_ordering(225) 00:09:38.509 fused_ordering(226) 00:09:38.509 fused_ordering(227) 00:09:38.509 fused_ordering(228) 00:09:38.509 fused_ordering(229) 00:09:38.509 fused_ordering(230) 00:09:38.509 fused_ordering(231) 00:09:38.509 fused_ordering(232) 00:09:38.509 fused_ordering(233) 00:09:38.509 fused_ordering(234) 00:09:38.509 fused_ordering(235) 00:09:38.509 fused_ordering(236) 00:09:38.509 fused_ordering(237) 00:09:38.509 fused_ordering(238) 00:09:38.509 fused_ordering(239) 00:09:38.509 fused_ordering(240) 00:09:38.509 fused_ordering(241) 00:09:38.509 fused_ordering(242) 00:09:38.509 fused_ordering(243) 00:09:38.509 fused_ordering(244) 00:09:38.509 fused_ordering(245) 00:09:38.509 fused_ordering(246) 00:09:38.509 fused_ordering(247) 00:09:38.509 fused_ordering(248) 00:09:38.509 fused_ordering(249) 00:09:38.509 fused_ordering(250) 00:09:38.509 fused_ordering(251) 00:09:38.509 fused_ordering(252) 00:09:38.509 fused_ordering(253) 00:09:38.509 fused_ordering(254) 00:09:38.509 fused_ordering(255) 00:09:38.509 fused_ordering(256) 00:09:38.509 fused_ordering(257) 00:09:38.509 fused_ordering(258) 00:09:38.509 fused_ordering(259) 00:09:38.509 fused_ordering(260) 00:09:38.509 fused_ordering(261) 00:09:38.509 fused_ordering(262) 00:09:38.509 fused_ordering(263) 00:09:38.509 fused_ordering(264) 00:09:38.509 fused_ordering(265) 00:09:38.509 fused_ordering(266) 00:09:38.509 fused_ordering(267) 00:09:38.509 fused_ordering(268) 00:09:38.509 fused_ordering(269) 00:09:38.509 fused_ordering(270) 00:09:38.509 fused_ordering(271) 00:09:38.509 fused_ordering(272) 00:09:38.509 fused_ordering(273) 00:09:38.509 fused_ordering(274) 00:09:38.509 fused_ordering(275) 00:09:38.509 fused_ordering(276) 00:09:38.509 fused_ordering(277) 00:09:38.509 fused_ordering(278) 00:09:38.509 fused_ordering(279) 00:09:38.509 fused_ordering(280) 00:09:38.509 fused_ordering(281) 00:09:38.509 fused_ordering(282) 00:09:38.509 fused_ordering(283) 00:09:38.509 fused_ordering(284) 00:09:38.509 fused_ordering(285) 00:09:38.509 fused_ordering(286) 00:09:38.509 fused_ordering(287) 00:09:38.509 fused_ordering(288) 00:09:38.509 fused_ordering(289) 00:09:38.509 fused_ordering(290) 00:09:38.509 fused_ordering(291) 00:09:38.509 fused_ordering(292) 00:09:38.509 fused_ordering(293) 00:09:38.509 fused_ordering(294) 00:09:38.509 fused_ordering(295) 00:09:38.509 fused_ordering(296) 00:09:38.509 fused_ordering(297) 00:09:38.509 fused_ordering(298) 00:09:38.509 fused_ordering(299) 00:09:38.509 fused_ordering(300) 00:09:38.509 fused_ordering(301) 00:09:38.509 fused_ordering(302) 00:09:38.509 fused_ordering(303) 00:09:38.509 fused_ordering(304) 00:09:38.509 fused_ordering(305) 00:09:38.509 fused_ordering(306) 00:09:38.509 fused_ordering(307) 00:09:38.509 fused_ordering(308) 00:09:38.509 fused_ordering(309) 00:09:38.509 fused_ordering(310) 00:09:38.509 fused_ordering(311) 00:09:38.509 fused_ordering(312) 00:09:38.509 
fused_ordering(313) 00:09:38.509 fused_ordering(314) 00:09:38.509 fused_ordering(315) 00:09:38.509 fused_ordering(316) 00:09:38.509 fused_ordering(317) 00:09:38.509 fused_ordering(318) 00:09:38.509 fused_ordering(319) 00:09:38.509 fused_ordering(320) 00:09:38.509 fused_ordering(321) 00:09:38.509 fused_ordering(322) 00:09:38.509 fused_ordering(323) 00:09:38.509 fused_ordering(324) 00:09:38.509 fused_ordering(325) 00:09:38.509 fused_ordering(326) 00:09:38.509 fused_ordering(327) 00:09:38.509 fused_ordering(328) 00:09:38.509 fused_ordering(329) 00:09:38.509 fused_ordering(330) 00:09:38.509 fused_ordering(331) 00:09:38.509 fused_ordering(332) 00:09:38.509 fused_ordering(333) 00:09:38.509 fused_ordering(334) 00:09:38.509 fused_ordering(335) 00:09:38.509 fused_ordering(336) 00:09:38.509 fused_ordering(337) 00:09:38.509 fused_ordering(338) 00:09:38.509 fused_ordering(339) 00:09:38.509 fused_ordering(340) 00:09:38.509 fused_ordering(341) 00:09:38.509 fused_ordering(342) 00:09:38.509 fused_ordering(343) 00:09:38.509 fused_ordering(344) 00:09:38.509 fused_ordering(345) 00:09:38.509 fused_ordering(346) 00:09:38.509 fused_ordering(347) 00:09:38.509 fused_ordering(348) 00:09:38.509 fused_ordering(349) 00:09:38.509 fused_ordering(350) 00:09:38.509 fused_ordering(351) 00:09:38.509 fused_ordering(352) 00:09:38.509 fused_ordering(353) 00:09:38.509 fused_ordering(354) 00:09:38.509 fused_ordering(355) 00:09:38.509 fused_ordering(356) 00:09:38.509 fused_ordering(357) 00:09:38.509 fused_ordering(358) 00:09:38.509 fused_ordering(359) 00:09:38.509 fused_ordering(360) 00:09:38.509 fused_ordering(361) 00:09:38.509 fused_ordering(362) 00:09:38.509 fused_ordering(363) 00:09:38.509 fused_ordering(364) 00:09:38.509 fused_ordering(365) 00:09:38.509 fused_ordering(366) 00:09:38.509 fused_ordering(367) 00:09:38.509 fused_ordering(368) 00:09:38.509 fused_ordering(369) 00:09:38.509 fused_ordering(370) 00:09:38.509 fused_ordering(371) 00:09:38.509 fused_ordering(372) 00:09:38.509 fused_ordering(373) 00:09:38.509 fused_ordering(374) 00:09:38.509 fused_ordering(375) 00:09:38.509 fused_ordering(376) 00:09:38.509 fused_ordering(377) 00:09:38.509 fused_ordering(378) 00:09:38.509 fused_ordering(379) 00:09:38.509 fused_ordering(380) 00:09:38.509 fused_ordering(381) 00:09:38.509 fused_ordering(382) 00:09:38.509 fused_ordering(383) 00:09:38.509 fused_ordering(384) 00:09:38.509 fused_ordering(385) 00:09:38.509 fused_ordering(386) 00:09:38.509 fused_ordering(387) 00:09:38.509 fused_ordering(388) 00:09:38.509 fused_ordering(389) 00:09:38.509 fused_ordering(390) 00:09:38.509 fused_ordering(391) 00:09:38.509 fused_ordering(392) 00:09:38.509 fused_ordering(393) 00:09:38.509 fused_ordering(394) 00:09:38.509 fused_ordering(395) 00:09:38.509 fused_ordering(396) 00:09:38.509 fused_ordering(397) 00:09:38.509 fused_ordering(398) 00:09:38.509 fused_ordering(399) 00:09:38.509 fused_ordering(400) 00:09:38.509 fused_ordering(401) 00:09:38.509 fused_ordering(402) 00:09:38.509 fused_ordering(403) 00:09:38.509 fused_ordering(404) 00:09:38.509 fused_ordering(405) 00:09:38.509 fused_ordering(406) 00:09:38.509 fused_ordering(407) 00:09:38.509 fused_ordering(408) 00:09:38.509 fused_ordering(409) 00:09:38.509 fused_ordering(410) 00:09:38.768 fused_ordering(411) 00:09:38.768 fused_ordering(412) 00:09:38.768 fused_ordering(413) 00:09:38.768 fused_ordering(414) 00:09:38.768 fused_ordering(415) 00:09:38.768 fused_ordering(416) 00:09:38.768 fused_ordering(417) 00:09:38.768 fused_ordering(418) 00:09:38.768 fused_ordering(419) 00:09:38.768 fused_ordering(420) 
00:09:38.768 fused_ordering(421) 00:09:38.768 fused_ordering(422) 00:09:38.768 fused_ordering(423) 00:09:38.768 fused_ordering(424) 00:09:38.768 fused_ordering(425) 00:09:38.768 fused_ordering(426) 00:09:38.768 fused_ordering(427) 00:09:38.768 fused_ordering(428) 00:09:38.768 fused_ordering(429) 00:09:38.768 fused_ordering(430) 00:09:38.768 fused_ordering(431) 00:09:38.768 fused_ordering(432) 00:09:38.768 fused_ordering(433) 00:09:38.768 fused_ordering(434) 00:09:38.768 fused_ordering(435) 00:09:38.768 fused_ordering(436) 00:09:38.768 fused_ordering(437) 00:09:38.768 fused_ordering(438) 00:09:38.768 fused_ordering(439) 00:09:38.768 fused_ordering(440) 00:09:38.768 fused_ordering(441) 00:09:38.768 fused_ordering(442) 00:09:38.769 fused_ordering(443) 00:09:38.769 fused_ordering(444) 00:09:38.769 fused_ordering(445) 00:09:38.769 fused_ordering(446) 00:09:38.769 fused_ordering(447) 00:09:38.769 fused_ordering(448) 00:09:38.769 fused_ordering(449) 00:09:38.769 fused_ordering(450) 00:09:38.769 fused_ordering(451) 00:09:38.769 fused_ordering(452) 00:09:38.769 fused_ordering(453) 00:09:38.769 fused_ordering(454) 00:09:38.769 fused_ordering(455) 00:09:38.769 fused_ordering(456) 00:09:38.769 fused_ordering(457) 00:09:38.769 fused_ordering(458) 00:09:38.769 fused_ordering(459) 00:09:38.769 fused_ordering(460) 00:09:38.769 fused_ordering(461) 00:09:38.769 fused_ordering(462) 00:09:38.769 fused_ordering(463) 00:09:38.769 fused_ordering(464) 00:09:38.769 fused_ordering(465) 00:09:38.769 fused_ordering(466) 00:09:38.769 fused_ordering(467) 00:09:38.769 fused_ordering(468) 00:09:38.769 fused_ordering(469) 00:09:38.769 fused_ordering(470) 00:09:38.769 fused_ordering(471) 00:09:38.769 fused_ordering(472) 00:09:38.769 fused_ordering(473) 00:09:38.769 fused_ordering(474) 00:09:38.769 fused_ordering(475) 00:09:38.769 fused_ordering(476) 00:09:38.769 fused_ordering(477) 00:09:38.769 fused_ordering(478) 00:09:38.769 fused_ordering(479) 00:09:38.769 fused_ordering(480) 00:09:38.769 fused_ordering(481) 00:09:38.769 fused_ordering(482) 00:09:38.769 fused_ordering(483) 00:09:38.769 fused_ordering(484) 00:09:38.769 fused_ordering(485) 00:09:38.769 fused_ordering(486) 00:09:38.769 fused_ordering(487) 00:09:38.769 fused_ordering(488) 00:09:38.769 fused_ordering(489) 00:09:38.769 fused_ordering(490) 00:09:38.769 fused_ordering(491) 00:09:38.769 fused_ordering(492) 00:09:38.769 fused_ordering(493) 00:09:38.769 fused_ordering(494) 00:09:38.769 fused_ordering(495) 00:09:38.769 fused_ordering(496) 00:09:38.769 fused_ordering(497) 00:09:38.769 fused_ordering(498) 00:09:38.769 fused_ordering(499) 00:09:38.769 fused_ordering(500) 00:09:38.769 fused_ordering(501) 00:09:38.769 fused_ordering(502) 00:09:38.769 fused_ordering(503) 00:09:38.769 fused_ordering(504) 00:09:38.769 fused_ordering(505) 00:09:38.769 fused_ordering(506) 00:09:38.769 fused_ordering(507) 00:09:38.769 fused_ordering(508) 00:09:38.769 fused_ordering(509) 00:09:38.769 fused_ordering(510) 00:09:38.769 fused_ordering(511) 00:09:38.769 fused_ordering(512) 00:09:38.769 fused_ordering(513) 00:09:38.769 fused_ordering(514) 00:09:38.769 fused_ordering(515) 00:09:38.769 fused_ordering(516) 00:09:38.769 fused_ordering(517) 00:09:38.769 fused_ordering(518) 00:09:38.769 fused_ordering(519) 00:09:38.769 fused_ordering(520) 00:09:38.769 fused_ordering(521) 00:09:38.769 fused_ordering(522) 00:09:38.769 fused_ordering(523) 00:09:38.769 fused_ordering(524) 00:09:38.769 fused_ordering(525) 00:09:38.769 fused_ordering(526) 00:09:38.769 fused_ordering(527) 00:09:38.769 
fused_ordering(528) 00:09:38.769 fused_ordering(529) 00:09:38.769 fused_ordering(530) 00:09:38.769 fused_ordering(531) 00:09:38.769 fused_ordering(532) 00:09:38.769 fused_ordering(533) 00:09:38.769 fused_ordering(534) 00:09:38.769 fused_ordering(535) 00:09:38.769 fused_ordering(536) 00:09:38.769 fused_ordering(537) 00:09:38.769 fused_ordering(538) 00:09:38.769 fused_ordering(539) 00:09:38.769 fused_ordering(540) 00:09:38.769 fused_ordering(541) 00:09:38.769 fused_ordering(542) 00:09:38.769 fused_ordering(543) 00:09:38.769 fused_ordering(544) 00:09:38.769 fused_ordering(545) 00:09:38.769 fused_ordering(546) 00:09:38.769 fused_ordering(547) 00:09:38.769 fused_ordering(548) 00:09:38.769 fused_ordering(549) 00:09:38.769 fused_ordering(550) 00:09:38.769 fused_ordering(551) 00:09:38.769 fused_ordering(552) 00:09:38.769 fused_ordering(553) 00:09:38.769 fused_ordering(554) 00:09:38.769 fused_ordering(555) 00:09:38.769 fused_ordering(556) 00:09:38.769 fused_ordering(557) 00:09:38.769 fused_ordering(558) 00:09:38.769 fused_ordering(559) 00:09:38.769 fused_ordering(560) 00:09:38.769 fused_ordering(561) 00:09:38.769 fused_ordering(562) 00:09:38.769 fused_ordering(563) 00:09:38.769 fused_ordering(564) 00:09:38.769 fused_ordering(565) 00:09:38.769 fused_ordering(566) 00:09:38.769 fused_ordering(567) 00:09:38.769 fused_ordering(568) 00:09:38.769 fused_ordering(569) 00:09:38.769 fused_ordering(570) 00:09:38.769 fused_ordering(571) 00:09:38.769 fused_ordering(572) 00:09:38.769 fused_ordering(573) 00:09:38.769 fused_ordering(574) 00:09:38.769 fused_ordering(575) 00:09:38.769 fused_ordering(576) 00:09:38.769 fused_ordering(577) 00:09:38.769 fused_ordering(578) 00:09:38.769 fused_ordering(579) 00:09:38.769 fused_ordering(580) 00:09:38.769 fused_ordering(581) 00:09:38.769 fused_ordering(582) 00:09:38.769 fused_ordering(583) 00:09:38.769 fused_ordering(584) 00:09:38.769 fused_ordering(585) 00:09:38.769 fused_ordering(586) 00:09:38.769 fused_ordering(587) 00:09:38.769 fused_ordering(588) 00:09:38.769 fused_ordering(589) 00:09:38.769 fused_ordering(590) 00:09:38.769 fused_ordering(591) 00:09:38.769 fused_ordering(592) 00:09:38.769 fused_ordering(593) 00:09:38.769 fused_ordering(594) 00:09:38.769 fused_ordering(595) 00:09:38.769 fused_ordering(596) 00:09:38.769 fused_ordering(597) 00:09:38.769 fused_ordering(598) 00:09:38.769 fused_ordering(599) 00:09:38.769 fused_ordering(600) 00:09:38.769 fused_ordering(601) 00:09:38.769 fused_ordering(602) 00:09:38.769 fused_ordering(603) 00:09:38.769 fused_ordering(604) 00:09:38.769 fused_ordering(605) 00:09:38.769 fused_ordering(606) 00:09:38.769 fused_ordering(607) 00:09:38.769 fused_ordering(608) 00:09:38.769 fused_ordering(609) 00:09:38.769 fused_ordering(610) 00:09:38.769 fused_ordering(611) 00:09:38.769 fused_ordering(612) 00:09:38.769 fused_ordering(613) 00:09:38.769 fused_ordering(614) 00:09:38.769 fused_ordering(615) 00:09:39.338 fused_ordering(616) 00:09:39.338 fused_ordering(617) 00:09:39.338 fused_ordering(618) 00:09:39.338 fused_ordering(619) 00:09:39.338 fused_ordering(620) 00:09:39.338 fused_ordering(621) 00:09:39.338 fused_ordering(622) 00:09:39.338 fused_ordering(623) 00:09:39.338 fused_ordering(624) 00:09:39.338 fused_ordering(625) 00:09:39.338 fused_ordering(626) 00:09:39.338 fused_ordering(627) 00:09:39.338 fused_ordering(628) 00:09:39.338 fused_ordering(629) 00:09:39.338 fused_ordering(630) 00:09:39.338 fused_ordering(631) 00:09:39.338 fused_ordering(632) 00:09:39.338 fused_ordering(633) 00:09:39.338 fused_ordering(634) 00:09:39.338 fused_ordering(635) 
00:09:39.338 fused_ordering(636) 00:09:39.338 fused_ordering(637) 00:09:39.338 fused_ordering(638) 00:09:39.338 fused_ordering(639) 00:09:39.338 fused_ordering(640) 00:09:39.338 fused_ordering(641) 00:09:39.338 fused_ordering(642) 00:09:39.338 fused_ordering(643) 00:09:39.338 fused_ordering(644) 00:09:39.338 fused_ordering(645) 00:09:39.338 fused_ordering(646) 00:09:39.338 fused_ordering(647) 00:09:39.338 fused_ordering(648) 00:09:39.338 fused_ordering(649) 00:09:39.338 fused_ordering(650) 00:09:39.338 fused_ordering(651) 00:09:39.338 fused_ordering(652) 00:09:39.338 fused_ordering(653) 00:09:39.338 fused_ordering(654) 00:09:39.338 fused_ordering(655) 00:09:39.338 fused_ordering(656) 00:09:39.338 fused_ordering(657) 00:09:39.338 fused_ordering(658) 00:09:39.338 fused_ordering(659) 00:09:39.338 fused_ordering(660) 00:09:39.338 fused_ordering(661) 00:09:39.338 fused_ordering(662) 00:09:39.338 fused_ordering(663) 00:09:39.338 fused_ordering(664) 00:09:39.338 fused_ordering(665) 00:09:39.338 fused_ordering(666) 00:09:39.338 fused_ordering(667) 00:09:39.338 fused_ordering(668) 00:09:39.338 fused_ordering(669) 00:09:39.338 fused_ordering(670) 00:09:39.338 fused_ordering(671) 00:09:39.338 fused_ordering(672) 00:09:39.338 fused_ordering(673) 00:09:39.338 fused_ordering(674) 00:09:39.338 fused_ordering(675) 00:09:39.338 fused_ordering(676) 00:09:39.338 fused_ordering(677) 00:09:39.338 fused_ordering(678) 00:09:39.338 fused_ordering(679) 00:09:39.338 fused_ordering(680) 00:09:39.338 fused_ordering(681) 00:09:39.338 fused_ordering(682) 00:09:39.338 fused_ordering(683) 00:09:39.338 fused_ordering(684) 00:09:39.338 fused_ordering(685) 00:09:39.338 fused_ordering(686) 00:09:39.338 fused_ordering(687) 00:09:39.338 fused_ordering(688) 00:09:39.338 fused_ordering(689) 00:09:39.338 fused_ordering(690) 00:09:39.338 fused_ordering(691) 00:09:39.338 fused_ordering(692) 00:09:39.338 fused_ordering(693) 00:09:39.338 fused_ordering(694) 00:09:39.338 fused_ordering(695) 00:09:39.338 fused_ordering(696) 00:09:39.338 fused_ordering(697) 00:09:39.338 fused_ordering(698) 00:09:39.338 fused_ordering(699) 00:09:39.338 fused_ordering(700) 00:09:39.338 fused_ordering(701) 00:09:39.338 fused_ordering(702) 00:09:39.338 fused_ordering(703) 00:09:39.338 fused_ordering(704) 00:09:39.338 fused_ordering(705) 00:09:39.338 fused_ordering(706) 00:09:39.338 fused_ordering(707) 00:09:39.338 fused_ordering(708) 00:09:39.338 fused_ordering(709) 00:09:39.338 fused_ordering(710) 00:09:39.338 fused_ordering(711) 00:09:39.338 fused_ordering(712) 00:09:39.338 fused_ordering(713) 00:09:39.338 fused_ordering(714) 00:09:39.338 fused_ordering(715) 00:09:39.338 fused_ordering(716) 00:09:39.338 fused_ordering(717) 00:09:39.338 fused_ordering(718) 00:09:39.338 fused_ordering(719) 00:09:39.338 fused_ordering(720) 00:09:39.338 fused_ordering(721) 00:09:39.338 fused_ordering(722) 00:09:39.338 fused_ordering(723) 00:09:39.338 fused_ordering(724) 00:09:39.338 fused_ordering(725) 00:09:39.338 fused_ordering(726) 00:09:39.338 fused_ordering(727) 00:09:39.338 fused_ordering(728) 00:09:39.338 fused_ordering(729) 00:09:39.338 fused_ordering(730) 00:09:39.338 fused_ordering(731) 00:09:39.338 fused_ordering(732) 00:09:39.338 fused_ordering(733) 00:09:39.338 fused_ordering(734) 00:09:39.338 fused_ordering(735) 00:09:39.338 fused_ordering(736) 00:09:39.338 fused_ordering(737) 00:09:39.338 fused_ordering(738) 00:09:39.338 fused_ordering(739) 00:09:39.338 fused_ordering(740) 00:09:39.338 fused_ordering(741) 00:09:39.338 fused_ordering(742) 00:09:39.338 
fused_ordering(743) 00:09:39.338 fused_ordering(744) 00:09:39.338 fused_ordering(745) 00:09:39.338 fused_ordering(746) 00:09:39.338 fused_ordering(747) 00:09:39.338 fused_ordering(748) 00:09:39.338 fused_ordering(749) 00:09:39.338 fused_ordering(750) 00:09:39.338 fused_ordering(751) 00:09:39.338 fused_ordering(752) 00:09:39.338 fused_ordering(753) 00:09:39.338 fused_ordering(754) 00:09:39.338 fused_ordering(755) 00:09:39.338 fused_ordering(756) 00:09:39.338 fused_ordering(757) 00:09:39.338 fused_ordering(758) 00:09:39.338 fused_ordering(759) 00:09:39.338 fused_ordering(760) 00:09:39.338 fused_ordering(761) 00:09:39.338 fused_ordering(762) 00:09:39.338 fused_ordering(763) 00:09:39.338 fused_ordering(764) 00:09:39.338 fused_ordering(765) 00:09:39.338 fused_ordering(766) 00:09:39.338 fused_ordering(767) 00:09:39.338 fused_ordering(768) 00:09:39.338 fused_ordering(769) 00:09:39.338 fused_ordering(770) 00:09:39.338 fused_ordering(771) 00:09:39.338 fused_ordering(772) 00:09:39.338 fused_ordering(773) 00:09:39.338 fused_ordering(774) 00:09:39.338 fused_ordering(775) 00:09:39.338 fused_ordering(776) 00:09:39.338 fused_ordering(777) 00:09:39.338 fused_ordering(778) 00:09:39.338 fused_ordering(779) 00:09:39.338 fused_ordering(780) 00:09:39.338 fused_ordering(781) 00:09:39.338 fused_ordering(782) 00:09:39.338 fused_ordering(783) 00:09:39.338 fused_ordering(784) 00:09:39.338 fused_ordering(785) 00:09:39.339 fused_ordering(786) 00:09:39.339 fused_ordering(787) 00:09:39.339 fused_ordering(788) 00:09:39.339 fused_ordering(789) 00:09:39.339 fused_ordering(790) 00:09:39.339 fused_ordering(791) 00:09:39.339 fused_ordering(792) 00:09:39.339 fused_ordering(793) 00:09:39.339 fused_ordering(794) 00:09:39.339 fused_ordering(795) 00:09:39.339 fused_ordering(796) 00:09:39.339 fused_ordering(797) 00:09:39.339 fused_ordering(798) 00:09:39.339 fused_ordering(799) 00:09:39.339 fused_ordering(800) 00:09:39.339 fused_ordering(801) 00:09:39.339 fused_ordering(802) 00:09:39.339 fused_ordering(803) 00:09:39.339 fused_ordering(804) 00:09:39.339 fused_ordering(805) 00:09:39.339 fused_ordering(806) 00:09:39.339 fused_ordering(807) 00:09:39.339 fused_ordering(808) 00:09:39.339 fused_ordering(809) 00:09:39.339 fused_ordering(810) 00:09:39.339 fused_ordering(811) 00:09:39.339 fused_ordering(812) 00:09:39.339 fused_ordering(813) 00:09:39.339 fused_ordering(814) 00:09:39.339 fused_ordering(815) 00:09:39.339 fused_ordering(816) 00:09:39.339 fused_ordering(817) 00:09:39.339 fused_ordering(818) 00:09:39.339 fused_ordering(819) 00:09:39.339 fused_ordering(820) 00:09:39.907 fused_ordering(821) 00:09:39.907 fused_ordering(822) 00:09:39.907 fused_ordering(823) 00:09:39.907 fused_ordering(824) 00:09:39.907 fused_ordering(825) 00:09:39.907 fused_ordering(826) 00:09:39.907 fused_ordering(827) 00:09:39.907 fused_ordering(828) 00:09:39.907 fused_ordering(829) 00:09:39.907 fused_ordering(830) 00:09:39.907 fused_ordering(831) 00:09:39.907 fused_ordering(832) 00:09:39.908 fused_ordering(833) 00:09:39.908 fused_ordering(834) 00:09:39.908 fused_ordering(835) 00:09:39.908 fused_ordering(836) 00:09:39.908 fused_ordering(837) 00:09:39.908 fused_ordering(838) 00:09:39.908 fused_ordering(839) 00:09:39.908 fused_ordering(840) 00:09:39.908 fused_ordering(841) 00:09:39.908 fused_ordering(842) 00:09:39.908 fused_ordering(843) 00:09:39.908 fused_ordering(844) 00:09:39.908 fused_ordering(845) 00:09:39.908 fused_ordering(846) 00:09:39.908 fused_ordering(847) 00:09:39.908 fused_ordering(848) 00:09:39.908 fused_ordering(849) 00:09:39.908 fused_ordering(850) 
00:09:39.908 fused_ordering(851) 00:09:39.908 fused_ordering(852) 00:09:39.908 fused_ordering(853) 00:09:39.908 fused_ordering(854) 00:09:39.908 fused_ordering(855) 00:09:39.908 fused_ordering(856) 00:09:39.908 fused_ordering(857) 00:09:39.908 fused_ordering(858) 00:09:39.908 fused_ordering(859) 00:09:39.908 fused_ordering(860) 00:09:39.908 fused_ordering(861) 00:09:39.908 fused_ordering(862) 00:09:39.908 fused_ordering(863) 00:09:39.908 fused_ordering(864) 00:09:39.908 fused_ordering(865) 00:09:39.908 fused_ordering(866) 00:09:39.908 fused_ordering(867) 00:09:39.908 fused_ordering(868) 00:09:39.908 fused_ordering(869) 00:09:39.908 fused_ordering(870) 00:09:39.908 fused_ordering(871) 00:09:39.908 fused_ordering(872) 00:09:39.908 fused_ordering(873) 00:09:39.908 fused_ordering(874) 00:09:39.908 fused_ordering(875) 00:09:39.908 fused_ordering(876) 00:09:39.908 fused_ordering(877) 00:09:39.908 fused_ordering(878) 00:09:39.908 fused_ordering(879) 00:09:39.908 fused_ordering(880) 00:09:39.908 fused_ordering(881) 00:09:39.908 fused_ordering(882) 00:09:39.908 fused_ordering(883) 00:09:39.908 fused_ordering(884) 00:09:39.908 fused_ordering(885) 00:09:39.908 fused_ordering(886) 00:09:39.908 fused_ordering(887) 00:09:39.908 fused_ordering(888) 00:09:39.908 fused_ordering(889) 00:09:39.908 fused_ordering(890) 00:09:39.908 fused_ordering(891) 00:09:39.908 fused_ordering(892) 00:09:39.908 fused_ordering(893) 00:09:39.908 fused_ordering(894) 00:09:39.908 fused_ordering(895) 00:09:39.908 fused_ordering(896) 00:09:39.908 fused_ordering(897) 00:09:39.908 fused_ordering(898) 00:09:39.908 fused_ordering(899) 00:09:39.908 fused_ordering(900) 00:09:39.908 fused_ordering(901) 00:09:39.908 fused_ordering(902) 00:09:39.908 fused_ordering(903) 00:09:39.908 fused_ordering(904) 00:09:39.908 fused_ordering(905) 00:09:39.908 fused_ordering(906) 00:09:39.908 fused_ordering(907) 00:09:39.908 fused_ordering(908) 00:09:39.908 fused_ordering(909) 00:09:39.908 fused_ordering(910) 00:09:39.908 fused_ordering(911) 00:09:39.908 fused_ordering(912) 00:09:39.908 fused_ordering(913) 00:09:39.908 fused_ordering(914) 00:09:39.908 fused_ordering(915) 00:09:39.908 fused_ordering(916) 00:09:39.908 fused_ordering(917) 00:09:39.908 fused_ordering(918) 00:09:39.908 fused_ordering(919) 00:09:39.908 fused_ordering(920) 00:09:39.908 fused_ordering(921) 00:09:39.908 fused_ordering(922) 00:09:39.908 fused_ordering(923) 00:09:39.908 fused_ordering(924) 00:09:39.908 fused_ordering(925) 00:09:39.908 fused_ordering(926) 00:09:39.908 fused_ordering(927) 00:09:39.908 fused_ordering(928) 00:09:39.908 fused_ordering(929) 00:09:39.908 fused_ordering(930) 00:09:39.908 fused_ordering(931) 00:09:39.908 fused_ordering(932) 00:09:39.908 fused_ordering(933) 00:09:39.908 fused_ordering(934) 00:09:39.908 fused_ordering(935) 00:09:39.908 fused_ordering(936) 00:09:39.908 fused_ordering(937) 00:09:39.908 fused_ordering(938) 00:09:39.908 fused_ordering(939) 00:09:39.908 fused_ordering(940) 00:09:39.908 fused_ordering(941) 00:09:39.908 fused_ordering(942) 00:09:39.908 fused_ordering(943) 00:09:39.908 fused_ordering(944) 00:09:39.908 fused_ordering(945) 00:09:39.908 fused_ordering(946) 00:09:39.908 fused_ordering(947) 00:09:39.908 fused_ordering(948) 00:09:39.908 fused_ordering(949) 00:09:39.908 fused_ordering(950) 00:09:39.908 fused_ordering(951) 00:09:39.908 fused_ordering(952) 00:09:39.908 fused_ordering(953) 00:09:39.908 fused_ordering(954) 00:09:39.908 fused_ordering(955) 00:09:39.908 fused_ordering(956) 00:09:39.908 fused_ordering(957) 00:09:39.908 
fused_ordering(958) 00:09:39.908 fused_ordering(959) 00:09:39.908 fused_ordering(960) 00:09:39.908 fused_ordering(961) 00:09:39.908 fused_ordering(962) 00:09:39.908 fused_ordering(963) 00:09:39.908 fused_ordering(964) 00:09:39.908 fused_ordering(965) 00:09:39.908 fused_ordering(966) 00:09:39.908 fused_ordering(967) 00:09:39.908 fused_ordering(968) 00:09:39.908 fused_ordering(969) 00:09:39.908 fused_ordering(970) 00:09:39.908 fused_ordering(971) 00:09:39.908 fused_ordering(972) 00:09:39.908 fused_ordering(973) 00:09:39.908 fused_ordering(974) 00:09:39.908 fused_ordering(975) 00:09:39.908 fused_ordering(976) 00:09:39.908 fused_ordering(977) 00:09:39.908 fused_ordering(978) 00:09:39.908 fused_ordering(979) 00:09:39.908 fused_ordering(980) 00:09:39.908 fused_ordering(981) 00:09:39.908 fused_ordering(982) 00:09:39.908 fused_ordering(983) 00:09:39.908 fused_ordering(984) 00:09:39.908 fused_ordering(985) 00:09:39.908 fused_ordering(986) 00:09:39.908 fused_ordering(987) 00:09:39.908 fused_ordering(988) 00:09:39.908 fused_ordering(989) 00:09:39.908 fused_ordering(990) 00:09:39.908 fused_ordering(991) 00:09:39.908 fused_ordering(992) 00:09:39.908 fused_ordering(993) 00:09:39.908 fused_ordering(994) 00:09:39.908 fused_ordering(995) 00:09:39.908 fused_ordering(996) 00:09:39.908 fused_ordering(997) 00:09:39.908 fused_ordering(998) 00:09:39.908 fused_ordering(999) 00:09:39.908 fused_ordering(1000) 00:09:39.908 fused_ordering(1001) 00:09:39.908 fused_ordering(1002) 00:09:39.908 fused_ordering(1003) 00:09:39.908 fused_ordering(1004) 00:09:39.908 fused_ordering(1005) 00:09:39.908 fused_ordering(1006) 00:09:39.908 fused_ordering(1007) 00:09:39.908 fused_ordering(1008) 00:09:39.908 fused_ordering(1009) 00:09:39.908 fused_ordering(1010) 00:09:39.908 fused_ordering(1011) 00:09:39.908 fused_ordering(1012) 00:09:39.908 fused_ordering(1013) 00:09:39.908 fused_ordering(1014) 00:09:39.908 fused_ordering(1015) 00:09:39.908 fused_ordering(1016) 00:09:39.908 fused_ordering(1017) 00:09:39.908 fused_ordering(1018) 00:09:39.908 fused_ordering(1019) 00:09:39.908 fused_ordering(1020) 00:09:39.908 fused_ordering(1021) 00:09:39.908 fused_ordering(1022) 00:09:39.908 fused_ordering(1023) 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:39.908 rmmod nvme_tcp 00:09:39.908 rmmod nvme_fabrics 00:09:39.908 rmmod nvme_keyring 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71268 ']' 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71268 00:09:39.908 15:32:34 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71268 ']' 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71268 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71268 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:39.908 killing process with pid 71268 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71268' 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71268 00:09:39.908 15:32:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71268 00:09:40.167 15:32:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:40.167 15:32:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:40.167 15:32:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:40.167 15:32:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.167 15:32:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:40.167 15:32:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.167 15:32:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.167 15:32:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.167 15:32:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:40.167 00:09:40.167 real 0m3.260s 00:09:40.167 user 0m3.910s 00:09:40.167 sys 0m1.248s 00:09:40.167 15:32:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.167 ************************************ 00:09:40.167 15:32:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:40.167 END TEST nvmf_fused_ordering 00:09:40.167 ************************************ 00:09:40.167 15:32:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:40.167 15:32:35 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:40.167 15:32:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:40.167 15:32:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.167 15:32:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.167 ************************************ 00:09:40.167 START TEST nvmf_delete_subsystem 00:09:40.167 ************************************ 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:40.168 * Looking for test storage... 
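Just above, nvmf_fused_ordering finishes and tears itself down: the EXIT trap is cleared, nvme-tcp / nvme-fabrics / nvme-keyring are unloaded, the target process (pid 71268) is killed and reaped, the SPDK network namespace is removed, and the initiator-side address is flushed before the harness moves on to nvmf_delete_subsystem. A condensed sketch of that teardown, using illustrative names rather than SPDK's actual nvmftestfini/killprocess helpers, is:

    # Sketch only: mirrors the teardown steps traced above, not SPDK's real helpers.
    teardown_nvmf_tcp() {
        local tgt_pid=$1                        # nvmf_tgt pid recorded at startup (71268 above)
        sync
        modprobe -v -r nvme-tcp                 # also drops nvme_fabrics / nvme_keyring
        modprobe -v -r nvme-fabrics
        kill "$tgt_pid" && wait "$tgt_pid"      # assumes the target was started by this shell
        ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
        ip -4 addr flush nvmf_init_if                  # drop the initiator-side address
    }
    # e.g.: teardown_nvmf_tcp 71268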
00:09:40.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.168 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:40.426 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:40.427 Cannot find device "nvmf_tgt_br" 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.427 Cannot find device "nvmf_tgt_br2" 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:40.427 Cannot find device "nvmf_tgt_br" 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:40.427 Cannot find device "nvmf_tgt_br2" 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:40.427 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:40.685 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:40.685 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:40.685 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:40.685 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:40.685 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:40.685 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:40.685 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:40.685 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:40.685 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:40.685 15:32:35 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:40.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:09:40.686 00:09:40.686 --- 10.0.0.2 ping statistics --- 00:09:40.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.686 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:40.686 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:40.686 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:09:40.686 00:09:40.686 --- 10.0.0.3 ping statistics --- 00:09:40.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.686 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:40.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:40.686 00:09:40.686 --- 10.0.0.1 ping statistics --- 00:09:40.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.686 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71484 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71484 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71484 ']' 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
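The nvmftestinit/nvmf_veth_init trace above builds the virtual topology the TCP tests run on: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a bridge (nvmf_br) joining them to the initiator side, 10.0.0.1 on the initiator, 10.0.0.2 and 10.0.0.3 inside the namespace, an iptables accept rule for port 4420, and ping checks in both directions. A condensed, standalone sketch of that setup (same interface names and addresses as the trace; error handling and pre-cleanup omitted) is:

    # Minimal re-creation of the veth/netns topology traced above (run as root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side #1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target side #2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # namespace -> host

The ping results in the trace (sub-0.1 ms RTT) simply confirm the bridge and namespace wiring before nvmf_tgt is launched inside the namespace.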
00:09:40.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.686 15:32:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:40.686 [2024-07-15 15:32:35.738060] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:09:40.686 [2024-07-15 15:32:35.738175] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.944 [2024-07-15 15:32:35.879305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.944 [2024-07-15 15:32:35.949063] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.944 [2024-07-15 15:32:35.949134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.944 [2024-07-15 15:32:35.949160] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.944 [2024-07-15 15:32:35.949170] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.944 [2024-07-15 15:32:35.949179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.944 [2024-07-15 15:32:35.949354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.944 [2024-07-15 15:32:35.949367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.881 [2024-07-15 15:32:36.807298] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.881 [2024-07-15 15:32:36.827377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.881 NULL1 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.881 Delay0 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71539 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:41.881 15:32:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:42.139 [2024-07-15 15:32:37.028234] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
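At this point the trace has started the target (nvmf_tgt -m 0x3 inside the namespace) and issued the RPCs that assemble the test subsystem: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev (Delay0, roughly one second of added latency per operation, the -r/-t/-w/-n values being microseconds) exposed as a namespace. spdk_nvme_perf is then run against it for 5 seconds, and the nvmf_delete_subsystem call that follows is issued while that I/O is still in flight, which is the point of the test. Collected from the trace, the sequence is roughly as below (rpc_cmd is the harness wrapper around scripts/rpc.py; socket handling is omitted):

    # RPC and perf sequence gathered from the trace above.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512                     # null bdev: 1000 MB, 512 B blocks
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000             # ~1 s added latency per op
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # 5 s of 70/30 random I/O from the initiator, then delete the subsystem mid-run.
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # issued while I/O is in flight

The burst of "completed with error (sct=0, sc=8)" lines right after the delete is the expected outcome: the target tears the queues down under live I/O and perf sees its outstanding commands completed with an abort-type status instead of hanging. Further below the test repeats the same pattern with a 3-second perf run and a kill -0 polling loop to confirm the perf process exits on its own.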
00:09:44.043 15:32:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.043 15:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.043 15:32:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 [2024-07-15 15:32:39.064113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e74c0 is same with the state(5) to be set 00:09:44.043 Read 
completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 
Read completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 starting I/O failed: -6 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.043 Read completed with error (sct=0, sc=8) 00:09:44.043 Write completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 starting I/O failed: -6 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 starting I/O failed: -6 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 starting I/O failed: -6 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 starting I/O failed: -6 00:09:44.044 [2024-07-15 15:32:39.066301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc3e8000c00 is same with the state(5) to be set 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 
00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.044 Write completed with error (sct=0, sc=8) 00:09:44.044 Read completed with error (sct=0, sc=8) 00:09:44.981 [2024-07-15 15:32:40.042198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5510 is same with the state(5) to be set 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed 
with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 [2024-07-15 15:32:40.063840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e8a80 is same with the state(5) to be set 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 [2024-07-15 15:32:40.064633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc3e800cfe0 is same with the state(5) to be set 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error 
(sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 [2024-07-15 15:32:40.064878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c58d0 is same with the state(5) to be set 00:09:44.981 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:44.981 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71539 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 Read completed with error (sct=0, sc=8) 00:09:44.981 Write completed with error (sct=0, sc=8) 00:09:44.981 [2024-07-15 15:32:40.066742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc3e800d740 is same with the state(5) to be set 00:09:44.981 Initializing NVMe Controllers 00:09:44.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:44.981 Controller IO queue size 128, less than required. 00:09:44.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:44.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:44.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:44.981 Initialization complete. Launching workers. 
00:09:44.981 ======================================================== 00:09:44.981 Latency(us) 00:09:44.981 Device Information : IOPS MiB/s Average min max 00:09:44.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.73 0.09 881498.94 479.86 1011820.13 00:09:44.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 173.26 0.08 888325.27 1878.96 1012599.57 00:09:44.981 ======================================================== 00:09:44.981 Total : 349.99 0.17 884878.22 479.86 1012599.57 00:09:44.981 00:09:44.981 [2024-07-15 15:32:40.067693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c5510 (9): Bad file descriptor 00:09:44.981 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71539 00:09:45.549 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71539) - No such process 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71539 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71539 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71539 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 [2024-07-15 15:32:40.588963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.549 15:32:40 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71586 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71586 00:09:45.549 15:32:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:45.807 [2024-07-15 15:32:40.761856] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:46.065 15:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:46.065 15:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71586 00:09:46.065 15:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:46.632 15:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:46.632 15:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71586 00:09:46.632 15:32:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:47.199 15:32:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:47.199 15:32:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71586 00:09:47.199 15:32:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:47.764 15:32:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:47.764 15:32:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71586 00:09:47.764 15:32:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:48.023 15:32:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:48.023 15:32:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71586 00:09:48.023 15:32:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:48.587 15:32:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:48.587 15:32:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71586 00:09:48.587 15:32:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:48.845 Initializing NVMe Controllers 00:09:48.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:48.845 Controller IO queue size 128, less than required. 
00:09:48.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:48.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:48.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:48.845 Initialization complete. Launching workers. 00:09:48.845 ======================================================== 00:09:48.845 Latency(us) 00:09:48.845 Device Information : IOPS MiB/s Average min max 00:09:48.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003162.65 1000143.43 1010767.74 00:09:48.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005625.74 1000302.30 1014209.77 00:09:48.845 ======================================================== 00:09:48.845 Total : 256.00 0.12 1004394.20 1000143.43 1014209.77 00:09:48.845 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71586 00:09:49.104 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71586) - No such process 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71586 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.104 rmmod nvme_tcp 00:09:49.104 rmmod nvme_fabrics 00:09:49.104 rmmod nvme_keyring 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71484 ']' 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71484 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71484 ']' 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71484 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.104 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71484 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:49.364 killing process with pid 
71484 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71484' 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71484 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71484 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:49.364 00:09:49.364 real 0m9.238s 00:09:49.364 user 0m28.661s 00:09:49.364 sys 0m1.499s 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:49.364 15:32:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:49.364 ************************************ 00:09:49.364 END TEST nvmf_delete_subsystem 00:09:49.364 ************************************ 00:09:49.364 15:32:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:49.364 15:32:44 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:49.364 15:32:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:49.364 15:32:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.364 15:32:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.364 ************************************ 00:09:49.364 START TEST nvmf_ns_masking 00:09:49.364 ************************************ 00:09:49.364 15:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:49.624 * Looking for test storage... 
00:09:49.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d176ccf5-f1e1-4cc4-97af-33f5014cb4cb 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=25a18393-3b28-4ed9-a262-31976fdc56ea 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:49.624 
15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=46b584b1-5af4-456f-81cc-a303abb36467 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:49.624 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:49.625 Cannot find device "nvmf_tgt_br" 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:09:49.625 15:32:44 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:49.625 Cannot find device "nvmf_tgt_br2" 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:49.625 Cannot find device "nvmf_tgt_br" 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:49.625 Cannot find device "nvmf_tgt_br2" 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:49.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:49.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:49.625 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:49.884 15:32:44 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:49.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:49.884 00:09:49.884 --- 10.0.0.2 ping statistics --- 00:09:49.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.884 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:49.884 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:49.884 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:49.884 00:09:49.884 --- 10.0.0.3 ping statistics --- 00:09:49.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.884 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:49.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:49.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:49.884 00:09:49.884 --- 10.0.0.1 ping statistics --- 00:09:49.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.884 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=71819 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 71819 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 71819 ']' 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.884 15:32:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:49.884 [2024-07-15 15:32:44.985910] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:09:49.884 [2024-07-15 15:32:44.986054] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.144 [2024-07-15 15:32:45.127633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.144 [2024-07-15 15:32:45.198603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.144 [2024-07-15 15:32:45.198655] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:50.144 [2024-07-15 15:32:45.198669] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.144 [2024-07-15 15:32:45.198678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.144 [2024-07-15 15:32:45.198687] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.144 [2024-07-15 15:32:45.198714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.081 15:32:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.081 15:32:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:51.081 15:32:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.081 15:32:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:51.081 15:32:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:51.081 15:32:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.081 15:32:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:51.340 [2024-07-15 15:32:46.293009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.340 15:32:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:51.340 15:32:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:51.340 15:32:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:51.599 Malloc1 00:09:51.599 15:32:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:51.857 Malloc2 00:09:51.857 15:32:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:52.144 15:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:52.403 15:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.662 [2024-07-15 15:32:47.667574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.662 15:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:52.662 15:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 46b584b1-5af4-456f-81cc-a303abb36467 -a 10.0.0.2 -s 4420 -i 4 00:09:52.662 15:32:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:52.662 15:32:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:52.662 15:32:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.662 15:32:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:52.662 15:32:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
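Condensed, the connect-and-inspect pattern the ns_masking test repeats from here on is the following host-side sequence; the address, NQNs and host UUID are simply the values used in this run, and /dev/nvme0 stands for whatever controller name the list-subsys query returns:

  # attach the initiator to the target subsystem over TCP
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 -I 46b584b1-5af4-456f-81cc-a303abb36467 -i 4
  # resolve the controller node the kernel created for this subsystem
  nvme list-subsys -o json | jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
  # a namespace counts as visible when it appears in the active namespace list ...
  nvme list-ns /dev/nvme0 | grep 0x1
  # ... and reports a non-zero NGUID in its per-namespace identify data
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid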
00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:55.194 [ 0]:0x1 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b2f73024fc34de185b2c75d70ff8af9 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b2f73024fc34de185b2c75d70ff8af9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:55.194 15:32:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:55.194 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:55.194 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:55.194 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:55.194 [ 0]:0x1 00:09:55.194 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:55.194 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:55.194 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b2f73024fc34de185b2c75d70ff8af9 00:09:55.194 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b2f73024fc34de185b2c75d70ff8af9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:55.194 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:55.194 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:55.194 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:55.194 [ 1]:0x2 00:09:55.194 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:55.194 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:55.453 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=9322a376ca8443d3ba5ca96a8997d913 00:09:55.453 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9322a376ca8443d3ba5ca96a8997d913 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:55.453 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:55.453 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.453 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.712 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:55.970 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:55.970 15:32:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 46b584b1-5af4-456f-81cc-a303abb36467 -a 10.0.0.2 -s 4420 -i 4 00:09:55.970 15:32:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:55.970 15:32:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:55.970 15:32:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.970 15:32:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:55.970 15:32:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:55.970 15:32:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:58.529 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:58.529 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:58.529 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:58.530 [ 0]:0x2 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9322a376ca8443d3ba5ca96a8997d913 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9322a376ca8443d3ba5ca96a8997d913 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:58.530 [ 0]:0x1 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b2f73024fc34de185b2c75d70ff8af9 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b2f73024fc34de185b2c75d70ff8af9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:58.530 [ 1]:0x2 00:09:58.530 15:32:53 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:58.530 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:58.788 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9322a376ca8443d3ba5ca96a8997d913 00:09:58.788 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9322a376ca8443d3ba5ca96a8997d913 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.788 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:59.046 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:59.046 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:59.046 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:59.046 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:59.046 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:59.046 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:59.046 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:59.046 15:32:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:59.046 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:59.046 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:59.046 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:59.046 15:32:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:59.046 [ 0]:0x2 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9322a376ca8443d3ba5ca96a8997d913 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9322a376ca8443d3ba5ca96a8997d913 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
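The masking control path exercised above comes down to three target-side RPCs; the repo path, subsystem NQN and host NQN are the ones from this run. The namespace is created hidden (--no-auto-visible) and then exposed to, or hidden from, individual host NQNs, which is why the masked checks read back an all-zero NGUID:

  # create namespace 1 hidden from every host by default
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
      nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # grant a single host NQN access to namespace 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host \
      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # revoke it again; the namespace disappears from that host's view
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host \
      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1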
00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.046 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:59.614 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:59.614 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 46b584b1-5af4-456f-81cc-a303abb36467 -a 10.0.0.2 -s 4420 -i 4 00:09:59.614 15:32:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:59.614 15:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:59.614 15:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.614 15:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:59.614 15:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:59.614 15:32:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:01.517 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:01.776 [ 0]:0x1 00:10:01.776 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:01.776 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:01.776 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b2f73024fc34de185b2c75d70ff8af9 00:10:01.776 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b2f73024fc34de185b2c75d70ff8af9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:01.776 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:01.776 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:01.776 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # grep 0x2 00:10:01.776 [ 1]:0x2 00:10:01.776 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:01.776 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:01.776 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9322a376ca8443d3ba5ca96a8997d913 00:10:01.776 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9322a376ca8443d3ba5ca96a8997d913 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:01.776 15:32:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:02.034 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:02.034 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:02.035 [ 0]:0x2 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9322a376ca8443d3ba5ca96a8997d913 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9322a376ca8443d3ba5ca96a8997d913 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:02.035 15:32:57 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:02.035 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:02.293 [2024-07-15 15:32:57.422348] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:02.551 2024/07/15 15:32:57 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:10:02.551 request: 00:10:02.551 { 00:10:02.551 "method": "nvmf_ns_remove_host", 00:10:02.551 "params": { 00:10:02.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.551 "nsid": 2, 00:10:02.551 "host": "nqn.2016-06.io.spdk:host1" 00:10:02.551 } 00:10:02.551 } 00:10:02.551 Got JSON-RPC error response 00:10:02.551 GoRPCClient: error on JSON-RPC call 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:02.551 15:32:57 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:02.551 [ 0]:0x2 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9322a376ca8443d3ba5ca96a8997d913 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9322a376ca8443d3ba5ca96a8997d913 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72207 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72207 /var/tmp/host.sock 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72207 ']' 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
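The NOT wrapper that keeps reappearing in this trace (common/autotest_common.sh@648-675) inverts the status of the command it wraps, which is how the test asserts that ns_is_visible 0x1 and the rejected nvmf_ns_remove_host call are supposed to fail. A condensed sketch of the idea, omitting the type -t / type -P argument validation and the signal-death (es > 128) handling that the trace shows:

    NOT() {
        local es=0
        # Run the wrapped command (shell function or executable) and record its status.
        "$@" || es=$?
        # Succeed only if the wrapped command failed.
        (( es != 0 ))
    }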
00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.551 15:32:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:02.809 [2024-07-15 15:32:57.681360] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:10:02.809 [2024-07-15 15:32:57.681472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72207 ] 00:10:02.809 [2024-07-15 15:32:57.817320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.809 [2024-07-15 15:32:57.886997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.067 15:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.067 15:32:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:03.067 15:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.325 15:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.584 15:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d176ccf5-f1e1-4cc4-97af-33f5014cb4cb 00:10:03.584 15:32:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:03.584 15:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D176CCF5F1E14CC497AF33F5014CB4CB -i 00:10:03.843 15:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 25a18393-3b28-4ed9-a262-31976fdc56ea 00:10:03.843 15:32:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:03.843 15:32:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 25A183933B284ED9A26231976FDC56EA -i 00:10:04.101 15:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:04.360 15:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:04.618 15:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:04.618 15:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:04.876 nvme0n1 00:10:04.876 15:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:04.876 15:32:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:05.133 nvme1n2 00:10:05.133 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:10:05.133 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:05.133 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:05.133 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:05.133 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:05.699 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:05.699 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:05.699 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:05.699 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:05.699 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d176ccf5-f1e1-4cc4-97af-33f5014cb4cb == \d\1\7\6\c\c\f\5\-\f\1\e\1\-\4\c\c\4\-\9\7\a\f\-\3\3\f\5\0\1\4\c\b\4\c\b ]] 00:10:05.699 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:05.699 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:05.699 15:33:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 25a18393-3b28-4ed9-a262-31976fdc56ea == \2\5\a\1\8\3\9\3\-\3\b\2\8\-\4\e\d\9\-\a\2\6\2\-\3\1\9\7\6\f\d\c\5\6\e\a ]] 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72207 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72207 ']' 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72207 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72207 00:10:06.265 killing process with pid 72207 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72207' 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72207 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72207 00:10:06.265 15:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:06.835 15:33:01 
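Two helpers carry this re-provisioning step: uuid2nguid (nvmf/common.sh@759) turns a bdev UUID into the NGUID string passed to nvmf_subsystem_add_ns -g, and hostrpc (target/ns_masking.sh@48) sends RPCs to the second SPDK app on /var/tmp/host.sock instead of the default target socket. Both bodies below are reconstructions from the trace; the upper-casing in uuid2nguid is inferred from the D176CCF5... output rather than shown explicitly.

    uuid2nguid() {
        # d176ccf5-f1e1-4cc4-97af-33f5014cb4cb -> D176CCF5F1E14CC497AF33F5014CB4CB
        local uuid=${1^^}
        echo "${uuid//-/}"
    }

    hostrpc() {
        # Talk to the initiator-side spdk_tgt (pid 72207) listening on host.sock.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }

The verification step is then e.g. hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid', which must return the same UUID that was just registered with -g.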
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:06.835 rmmod nvme_tcp 00:10:06.835 rmmod nvme_fabrics 00:10:06.835 rmmod nvme_keyring 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 71819 ']' 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 71819 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 71819 ']' 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 71819 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71819 00:10:06.835 killing process with pid 71819 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71819' 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 71819 00:10:06.835 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 71819 00:10:07.094 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:07.094 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:07.094 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:07.094 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.094 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:07.094 15:33:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.094 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.094 15:33:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.094 15:33:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:07.094 00:10:07.094 real 0m17.540s 00:10:07.094 user 0m27.769s 00:10:07.094 sys 0m2.562s 00:10:07.094 ************************************ 00:10:07.094 END TEST nvmf_ns_masking 00:10:07.094 ************************************ 00:10:07.094 15:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.094 15:33:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:07.094 15:33:02 nvmf_tcp -- common/autotest_common.sh@1142 
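killprocess (common/autotest_common.sh@948-972) is what tears down both the host app (pid 72207) and the nvmf target (pid 71819) here. A simplified sketch of the traced path; the real helper also special-cases processes running under sudo and non-Linux hosts:

    killprocess() {
        local pid=$1 process_name
        # Skip silently if the process is already gone.
        kill -0 "$pid" || return 0
        # On Linux, resolve the command name (reactor_0 / reactor_1 in this run).
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }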
-- # return 0 00:10:07.094 15:33:02 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:10:07.094 15:33:02 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:10:07.094 15:33:02 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:07.094 15:33:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:07.094 15:33:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.094 15:33:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.094 ************************************ 00:10:07.094 START TEST nvmf_host_management 00:10:07.094 ************************************ 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:07.094 * Looking for test storage... 00:10:07.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:07.094 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:07.095 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:07.095 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:07.095 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:07.095 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:07.095 Cannot find device "nvmf_tgt_br" 00:10:07.352 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:10:07.352 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:07.352 Cannot find device "nvmf_tgt_br2" 00:10:07.352 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:10:07.352 15:33:02 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:07.352 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:07.352 Cannot find device "nvmf_tgt_br" 00:10:07.352 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:10:07.352 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:07.352 Cannot find device "nvmf_tgt_br2" 00:10:07.352 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:07.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:07.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:07.353 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:07.611 
15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:07.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:10:07.611 00:10:07.611 --- 10.0.0.2 ping statistics --- 00:10:07.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.611 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:07.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:07.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:10:07.611 00:10:07.611 --- 10.0.0.3 ping statistics --- 00:10:07.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.611 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:07.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:07.611 00:10:07.611 --- 10.0.0.1 ping statistics --- 00:10:07.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.611 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- 
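The pings above are the smoke test for a topology built entirely from veth pairs and one bridge. The earlier "Cannot find device" / "Cannot open network namespace" messages are only the tolerated cleanup of a previous run's topology that does not exist on this fresh VM. Condensing the nvmf_veth_init trace (nvmf/common.sh@166-207) and omitting the individual "ip link set ... up" calls:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target leg 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target leg 2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge joins the legs
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT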
common/autotest_common.sh@10 -- # set +x 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72552 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72552 00:10:07.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72552 ']' 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.611 15:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:07.611 [2024-07-15 15:33:02.654718] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:10:07.611 [2024-07-15 15:33:02.655045] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.870 [2024-07-15 15:33:02.797642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.870 [2024-07-15 15:33:02.871353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.870 [2024-07-15 15:33:02.871630] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.870 [2024-07-15 15:33:02.871830] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.870 [2024-07-15 15:33:02.871849] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.870 [2024-07-15 15:33:02.871858] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
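nvmfappstart then launches the target inside that namespace. The backgrounding and pid capture are implied by the nvmfpid=72552 assignment rather than shown verbatim, so treat this as a sketch of nvmf/common.sh@480-482:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Block until the app answers on /var/tmp/spdk.sock before any rpc_cmd call.
    waitforlisten "$nvmfpid"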
00:10:07.870 [2024-07-15 15:33:02.872015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.870 [2024-07-15 15:33:02.873240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.870 [2024-07-15 15:33:02.873331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:07.870 [2024-07-15 15:33:02.873342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 [2024-07-15 15:33:03.727898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 Malloc0 00:10:08.802 [2024-07-15 15:33:03.791970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
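The rpcs.txt batch built at target/host_management.sh@22-23 is not echoed into the log, but its observable results are: a 64 MiB / 512 B Malloc0 bdev, a cnode0 subsystem that bdevperf later logs into as nqn.2016-06.io.spdk:host0, and a TCP listener on 10.0.0.2:4420. A batch along the following lines, fed through the single rpc_cmd call at @30, would produce exactly that; every line here is an assumption, not a quote of the script:

    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The explicit add_host (rather than allow-any-host) is inferred from the fact that nvmf_subsystem_remove_host later in the run actually severs the connection.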
00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=72624 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 72624 /var/tmp/bdevperf.sock 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72624 ']' 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:08.802 15:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:08.802 { 00:10:08.802 "params": { 00:10:08.802 "name": "Nvme$subsystem", 00:10:08.802 "trtype": "$TEST_TRANSPORT", 00:10:08.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.803 "adrfam": "ipv4", 00:10:08.803 "trsvcid": "$NVMF_PORT", 00:10:08.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.803 "hdgst": ${hdgst:-false}, 00:10:08.803 "ddgst": ${ddgst:-false} 00:10:08.803 }, 00:10:08.803 "method": "bdev_nvme_attach_controller" 00:10:08.803 } 00:10:08.803 EOF 00:10:08.803 )") 00:10:08.803 15:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:08.803 15:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:08.803 15:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:08.803 15:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:08.803 "params": { 00:10:08.803 "name": "Nvme0", 00:10:08.803 "trtype": "tcp", 00:10:08.803 "traddr": "10.0.0.2", 00:10:08.803 "adrfam": "ipv4", 00:10:08.803 "trsvcid": "4420", 00:10:08.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:08.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:08.803 "hdgst": false, 00:10:08.803 "ddgst": false 00:10:08.803 }, 00:10:08.803 "method": "bdev_nvme_attach_controller" 00:10:08.803 }' 00:10:08.803 [2024-07-15 15:33:03.892272] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
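The --json /dev/fd/63 in the bdevperf command line above is the footprint of process substitution: the heredoc template printed by gen_nvmf_target_json 0 resolves to a single bdev_nvme_attach_controller entry for Nvme0 against 10.0.0.2:4420, as the printf output shows. Reassembled from target/host_management.sh@72-74 (the & and $! are implied by perfpid=72624 rather than traced):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock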
00:10:08.803 [2024-07-15 15:33:03.892362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72624 ] 00:10:09.061 [2024-07-15 15:33:04.031814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.061 [2024-07-15 15:33:04.100713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.320 Running I/O for 10 seconds... 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.888 
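waitforio (target/host_management.sh@52-64) polls the bdevperf app's iostat until the verify job has demonstrably done I/O; in this run the first poll already reported 963 read ops, so the -ge 100 threshold passed immediately. A condensed sketch; the retry cadence (sleep 1) is an assumption, since the loop never had to iterate here:

    waitforio() {
        local sock=$1 bdev=$2 ret=1 i count
        for ((i = 10; i != 0; i--)); do
            count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                ret=0   # enough traffic observed
                break
            fi
            sleep 1
        done
        return "$ret"
    }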
15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.888 [2024-07-15 15:33:04.965104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53310 is same with the state(5) to be set 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.888 [2024-07-15 15:33:04.972491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:09.888 [2024-07-15 15:33:04.972705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.972726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:09.888 [2024-07-15 15:33:04.972737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.972747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:09.888 [2024-07-15 15:33:04.972756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.972767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:09.888 [2024-07-15 15:33:04.972776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.972786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a9af0 is same with the state(5) to be set 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.888 15:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:09.888 [2024-07-15 15:33:04.982001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.888 [2024-07-15 15:33:04.982056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.982107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.888 [2024-07-15 15:33:04.982127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.982164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.888 [2024-07-15 15:33:04.982193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 
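Everything from here to the end of the dump is the expected fallout of target/host_management.sh@84-87: the host NQN is pulled from the subsystem's allowed list while the verify job is mid-flight, the target drops the TCP queue pair, and the initiator-side driver fails every outstanding WRITE with ABORTED - SQ DELETION; the host is then re-added and given a second to reconnect. The step itself, as traced:

    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1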
[2024-07-15 15:33:04.982214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.888 [2024-07-15 15:33:04.982231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.982249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.888 [2024-07-15 15:33:04.982266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.982286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.888 [2024-07-15 15:33:04.982303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.982322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.888 [2024-07-15 15:33:04.982340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.982359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.888 [2024-07-15 15:33:04.982376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.982396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.888 [2024-07-15 15:33:04.982414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.982433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.888 [2024-07-15 15:33:04.982450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.982470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.888 [2024-07-15 15:33:04.982487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.982541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.888 [2024-07-15 15:33:04.982562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.888 [2024-07-15 15:33:04.982582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.889 [2024-07-15 15:33:04.982598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.889 [2024-07-15 15:33:04.982611] 
nvme_qpair.c / spdk_nvme_print_completion: the same two NOTICE lines repeat for every outstanding write on qpair 1 (cid:13 through cid:62, lba 9856 to 16128 in steps of 128 blocks, len:128, SGL TRANSPORT DATA BLOCK), each command completing as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 while the submission queue is deleted for the controller reset; the final pair follows. 00:10:09.890 [2024-07-15 15:33:04.983751] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.890 [2024-07-15 15:33:04.983761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.890 [2024-07-15 15:33:04.983823] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16a9820 was disconnected and freed. reset controller. 00:10:09.890 [2024-07-15 15:33:04.983856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a9af0 (9): Bad file descriptor 00:10:09.890 [2024-07-15 15:33:04.985046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:09.890 task offset: 8192 on job bdev=Nvme0n1 fails 00:10:09.890 00:10:09.890 Latency(us) 00:10:09.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.890 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:09.890 Job: Nvme0n1 ended in about 0.74 seconds with error 00:10:09.890 Verification LBA range: start 0x0 length 0x400 00:10:09.890 Nvme0n1 : 0.74 1474.34 92.15 86.73 0.00 39880.64 2263.97 41228.10 00:10:09.890 =================================================================================================================== 00:10:09.890 Total : 1474.34 92.15 86.73 0.00 39880.64 2263.97 41228.10 00:10:09.890 [2024-07-15 15:33:04.987090] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:09.890 [2024-07-15 15:33:04.994448] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:11.268 15:33:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 72624 00:10:11.268 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72624) - No such process 00:10:11.268 15:33:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:11.268 15:33:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:11.268 15:33:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:11.268 15:33:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:11.268 15:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:11.268 15:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:11.268 15:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:11.268 15:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:11.268 { 00:10:11.268 "params": { 00:10:11.268 "name": "Nvme$subsystem", 00:10:11.268 "trtype": "$TEST_TRANSPORT", 00:10:11.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.268 "adrfam": "ipv4", 00:10:11.268 "trsvcid": "$NVMF_PORT", 00:10:11.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.268 "hdgst": ${hdgst:-false}, 00:10:11.268 "ddgst": ${ddgst:-false} 00:10:11.268 }, 00:10:11.268 "method": "bdev_nvme_attach_controller" 00:10:11.268 } 00:10:11.268 EOF 00:10:11.268 )") 00:10:11.268 15:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:11.268 
15:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:11.268 15:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:11.268 15:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:11.268 "params": { 00:10:11.268 "name": "Nvme0", 00:10:11.268 "trtype": "tcp", 00:10:11.268 "traddr": "10.0.0.2", 00:10:11.268 "adrfam": "ipv4", 00:10:11.268 "trsvcid": "4420", 00:10:11.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:11.268 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:11.268 "hdgst": false, 00:10:11.268 "ddgst": false 00:10:11.268 }, 00:10:11.268 "method": "bdev_nvme_attach_controller" 00:10:11.268 }' 00:10:11.268 [2024-07-15 15:33:06.044301] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:10:11.268 [2024-07-15 15:33:06.044391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72674 ] 00:10:11.268 [2024-07-15 15:33:06.179711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.268 [2024-07-15 15:33:06.236518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.268 Running I/O for 1 seconds... 00:10:12.646 00:10:12.646 Latency(us) 00:10:12.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.646 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:12.646 Verification LBA range: start 0x0 length 0x400 00:10:12.646 Nvme0n1 : 1.00 1597.60 99.85 0.00 0.00 39217.72 5242.88 36938.47 00:10:12.646 =================================================================================================================== 00:10:12.646 Total : 1597.60 99.85 0.00 0.00 39217.72 5242.88 36938.47 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:12.646 rmmod nvme_tcp 00:10:12.646 rmmod nvme_fabrics 00:10:12.646 rmmod nvme_keyring 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- 
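For reference, the bdevperf run traced above takes its bdev configuration as JSON on an inherited descriptor (--json /dev/fd/62), built from the same parameters shown in the printf step. A minimal standalone sketch of that configuration and invocation, assuming the standard top-level "subsystems"/"bdev" wrapper that SPDK applications accept with --json; the file path /tmp/bdevperf_nvme.json is illustrative only:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload shape as the passing 1-second run above: queue depth 64, 64 KiB verify writes.
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 1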
# '[' -n 72552 ']' 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 72552 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72552 ']' 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72552 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72552 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72552' 00:10:12.646 killing process with pid 72552 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72552 00:10:12.646 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72552 00:10:12.906 [2024-07-15 15:33:07.817069] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:12.906 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.906 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:12.906 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:12.906 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.906 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.906 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.906 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.906 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.906 15:33:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:12.906 15:33:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:12.906 ************************************ 00:10:12.906 END TEST nvmf_host_management 00:10:12.906 ************************************ 00:10:12.906 00:10:12.906 real 0m5.806s 00:10:12.906 user 0m22.625s 00:10:12.906 sys 0m1.237s 00:10:12.906 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.906 15:33:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.906 15:33:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:12.906 15:33:07 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:12.906 15:33:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:12.906 15:33:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.906 15:33:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.906 ************************************ 00:10:12.906 START TEST nvmf_lvol 00:10:12.906 ************************************ 00:10:12.906 15:33:07 
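The teardown traced above (a kill -0 liveness probe, a comm= check to avoid signalling a sudo wrapper, then kill and wait on pid 72552) is the suite's killprocess helper. A rough standalone sketch of that pattern is below; the function name kill_spdk_pid and the 30-second force-kill fallback are made up for illustration and are not the helper actually used here:

kill_spdk_pid() {
    local pid=$1
    # Nothing to do if the process is already gone (same kill -0 probe as in the trace).
    kill -0 "$pid" 2>/dev/null || return 0
    # Refuse to signal a sudo wrapper, mirroring the ps --no-headers -o comm= check above.
    [[ "$(ps --no-headers -o comm= "$pid")" == "sudo" ]] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # 'wait' only reaps children of this shell; poll as a fallback, then force-kill.
    for _ in {1..30}; do
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 1
    done
    kill -9 "$pid" 2>/dev/null || true
}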
nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:12.906 * Looking for test storage... 00:10:12.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.906 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:13.166 Cannot find device "nvmf_tgt_br" 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:13.166 Cannot find device "nvmf_tgt_br2" 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:13.166 Cannot find device "nvmf_tgt_br" 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:13.166 Cannot find device "nvmf_tgt_br2" 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:10:13.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:13.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:13.166 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:13.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:13.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:10:13.426 00:10:13.426 --- 10.0.0.2 ping statistics --- 00:10:13.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.426 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:13.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:13.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:10:13.426 00:10:13.426 --- 10.0.0.3 ping statistics --- 00:10:13.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.426 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:13.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:13.426 00:10:13.426 --- 10.0.0.1 ping statistics --- 00:10:13.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.426 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=72887 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 72887 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 72887 ']' 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:13.426 15:33:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:13.426 [2024-07-15 15:33:08.472040] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
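The ping results above close out nvmf_veth_init: the initiator side (10.0.0.1 on nvmf_init_if) and the target side (10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace) sit on veth pairs joined by the nvmf_br bridge, with iptables opened for TCP port 4420. A condensed, root-only sketch of that topology using the interface names and addresses from the trace; the second target interface carrying 10.0.0.3 is omitted for brevity:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator to target, should answer as in the statistics above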
00:10:13.426 [2024-07-15 15:33:08.472143] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.685 [2024-07-15 15:33:08.609586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:13.685 [2024-07-15 15:33:08.680105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.685 [2024-07-15 15:33:08.680169] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.685 [2024-07-15 15:33:08.680183] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.685 [2024-07-15 15:33:08.680192] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.685 [2024-07-15 15:33:08.680201] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.685 [2024-07-15 15:33:08.680572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.685 [2024-07-15 15:33:08.680674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.685 [2024-07-15 15:33:08.680680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.622 15:33:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:14.622 15:33:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:10:14.622 15:33:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:14.622 15:33:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:14.622 15:33:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:14.622 15:33:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.622 15:33:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:14.881 [2024-07-15 15:33:09.772612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.881 15:33:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.140 15:33:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:15.140 15:33:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.399 15:33:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:15.399 15:33:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:15.658 15:33:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:15.917 15:33:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=566846f2-5415-4d9a-9f3d-7246b02aa612 00:10:15.917 15:33:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 566846f2-5415-4d9a-9f3d-7246b02aa612 lvol 20 00:10:16.176 15:33:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=62d65320-5104-4f4c-a4b2-dc87ccb80693 00:10:16.176 15:33:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:16.434 15:33:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 62d65320-5104-4f4c-a4b2-dc87ccb80693 00:10:16.709 15:33:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:16.990 [2024-07-15 15:33:12.011477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.990 15:33:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:17.248 15:33:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73035 00:10:17.248 15:33:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:17.248 15:33:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:18.621 15:33:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 62d65320-5104-4f4c-a4b2-dc87ccb80693 MY_SNAPSHOT 00:10:18.621 15:33:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d1a1bea8-34fb-4e8f-8743-0c10bd031291 00:10:18.621 15:33:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 62d65320-5104-4f4c-a4b2-dc87ccb80693 30 00:10:18.879 15:33:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d1a1bea8-34fb-4e8f-8743-0c10bd031291 MY_CLONE 00:10:19.445 15:33:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9ccfb96c-1ff2-4d4d-9704-76a864595982 00:10:19.445 15:33:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 9ccfb96c-1ff2-4d4d-9704-76a864595982 00:10:20.011 15:33:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73035 00:10:28.122 Initializing NVMe Controllers 00:10:28.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:28.122 Controller IO queue size 128, less than required. 00:10:28.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:28.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:28.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:28.122 Initialization complete. Launching workers. 
00:10:28.122 ======================================================== 00:10:28.122 Latency(us) 00:10:28.122 Device Information : IOPS MiB/s Average min max 00:10:28.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10552.30 41.22 12139.23 2770.34 55786.82 00:10:28.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10635.10 41.54 12039.23 3765.78 68357.01 00:10:28.122 ======================================================== 00:10:28.122 Total : 21187.40 82.76 12089.04 2770.34 68357.01 00:10:28.122 00:10:28.122 15:33:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:28.122 15:33:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 62d65320-5104-4f4c-a4b2-dc87ccb80693 00:10:28.122 15:33:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 566846f2-5415-4d9a-9f3d-7246b02aa612 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:28.379 rmmod nvme_tcp 00:10:28.379 rmmod nvme_fabrics 00:10:28.379 rmmod nvme_keyring 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 72887 ']' 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 72887 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 72887 ']' 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 72887 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72887 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72887' 00:10:28.379 killing process with pid 72887 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 72887 00:10:28.379 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 72887 00:10:28.638 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:28.638 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
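Condensing the nvmf_lvol RPC trace above into one sequence: two malloc bdevs become a raid0, a lvol store and a 20 MiB lvol are carved out of it, the lvol is exported over NVMe/TCP, and snapshot, resize to 30 MiB, clone, and inflate are exercised while spdk_nvme_perf writes to it. A rough sketch of the same flow against a running target; capturing the UUIDs by command substitution is a simplification of how the suite parses the RPC output:

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                       # Malloc0
$rpc bdev_malloc_create 64 512                       # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # lvol store UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol bdev
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
# Teardown mirrors the trace: delete the subsystem, the lvol, then the lvol store.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"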
00:10:28.638 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:28.638 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:28.638 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:28.638 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.638 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.638 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.638 15:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:28.638 ************************************ 00:10:28.638 END TEST nvmf_lvol 00:10:28.638 ************************************ 00:10:28.638 00:10:28.638 real 0m15.797s 00:10:28.638 user 1m6.230s 00:10:28.638 sys 0m3.687s 00:10:28.638 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.638 15:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:28.897 15:33:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:28.897 15:33:23 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:28.897 15:33:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:28.897 15:33:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.897 15:33:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:28.897 ************************************ 00:10:28.897 START TEST nvmf_lvs_grow 00:10:28.897 ************************************ 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:28.897 * Looking for test storage... 
00:10:28.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.897 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:28.898 Cannot find device "nvmf_tgt_br" 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:28.898 Cannot find device "nvmf_tgt_br2" 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:28.898 Cannot find device "nvmf_tgt_br" 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:28.898 Cannot find device "nvmf_tgt_br2" 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:10:28.898 15:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:28.898 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:29.156 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:29.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:29.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:29.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:10:29.156 00:10:29.156 --- 10.0.0.2 ping statistics --- 00:10:29.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.156 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:29.156 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:29.156 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:10:29.156 00:10:29.156 --- 10.0.0.3 ping statistics --- 00:10:29.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.156 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:29.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:29.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:10:29.156 00:10:29.156 --- 10.0.0.1 ping statistics --- 00:10:29.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.156 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.156 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73397 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73397 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73397 ']' 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
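The ip/iptables sequence traced above is the suite's standard test topology: a veth pair for the initiator (nvmf_init_if, 10.0.0.1) and one per target interface (nvmf_tgt_if, 10.0.0.2, moved into the nvmf_tgt_ns_spdk namespace), all joined through the nvmf_br bridge. A condensed sketch of the same setup, assuming iproute2 and iptables with root privileges and leaving out the second target interface (nvmf_tgt_if2 / 10.0.0.3) for brevity:

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if ends carry addresses, the *_br ends are enslaved to the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # admit NVMe/TCP on port 4420 and allow hairpin forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target namespace, mirroring the check above

The target is then launched inside this namespace via ip netns exec nvmf_tgt_ns_spdk, as the nvmf_tgt command line in the trace shows.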
00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.157 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:29.415 [2024-07-15 15:33:24.337200] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:10:29.415 [2024-07-15 15:33:24.337316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.415 [2024-07-15 15:33:24.476574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.415 [2024-07-15 15:33:24.532071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.415 [2024-07-15 15:33:24.532147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.415 [2024-07-15 15:33:24.532157] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.415 [2024-07-15 15:33:24.532164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.415 [2024-07-15 15:33:24.532171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.415 [2024-07-15 15:33:24.532194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.673 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.673 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:10:29.673 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:29.673 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:29.673 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:29.673 15:33:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.673 15:33:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:29.931 [2024-07-15 15:33:24.915203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:29.931 ************************************ 00:10:29.931 START TEST lvs_grow_clean 00:10:29.931 ************************************ 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:29.931 15:33:24 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:29.931 15:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:30.189 15:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:30.189 15:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:30.448 15:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=59d1170a-63c5-4fed-be29-ec4318bc26f1 00:10:30.448 15:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59d1170a-63c5-4fed-be29-ec4318bc26f1 00:10:30.448 15:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:30.705 15:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:30.705 15:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:30.705 15:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 59d1170a-63c5-4fed-be29-ec4318bc26f1 lvol 150 00:10:31.270 15:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7951ed17-4af5-4da8-bfde-c80ccabf078e 00:10:31.270 15:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:31.270 15:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:31.270 [2024-07-15 15:33:26.358435] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:31.270 [2024-07-15 15:33:26.358543] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:31.270 true 00:10:31.270 15:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59d1170a-63c5-4fed-be29-ec4318bc26f1 00:10:31.270 15:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:31.528 15:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:31.528 15:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:32.092 15:33:26 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7951ed17-4af5-4da8-bfde-c80ccabf078e 00:10:32.092 15:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:32.350 [2024-07-15 15:33:27.363063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.350 15:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:32.607 15:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73545 00:10:32.607 15:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:32.607 15:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:32.607 15:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73545 /var/tmp/bdevperf.sock 00:10:32.607 15:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 73545 ']' 00:10:32.607 15:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:32.608 15:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:32.608 15:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:32.608 15:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.608 15:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:32.608 [2024-07-15 15:33:27.669274] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:10:32.608 [2024-07-15 15:33:27.669395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73545 ] 00:10:32.866 [2024-07-15 15:33:27.803041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.866 [2024-07-15 15:33:27.860733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.801 15:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.801 15:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:10:33.801 15:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:33.801 Nvme0n1 00:10:33.801 15:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:34.060 [ 00:10:34.060 { 00:10:34.060 "aliases": [ 00:10:34.060 "7951ed17-4af5-4da8-bfde-c80ccabf078e" 00:10:34.060 ], 00:10:34.060 "assigned_rate_limits": { 00:10:34.060 "r_mbytes_per_sec": 0, 00:10:34.060 "rw_ios_per_sec": 0, 00:10:34.060 "rw_mbytes_per_sec": 0, 00:10:34.060 "w_mbytes_per_sec": 0 00:10:34.060 }, 00:10:34.060 "block_size": 4096, 00:10:34.060 "claimed": false, 00:10:34.060 "driver_specific": { 00:10:34.060 "mp_policy": "active_passive", 00:10:34.060 "nvme": [ 00:10:34.060 { 00:10:34.060 "ctrlr_data": { 00:10:34.060 "ana_reporting": false, 00:10:34.060 "cntlid": 1, 00:10:34.060 "firmware_revision": "24.09", 00:10:34.060 "model_number": "SPDK bdev Controller", 00:10:34.060 "multi_ctrlr": true, 00:10:34.060 "oacs": { 00:10:34.060 "firmware": 0, 00:10:34.060 "format": 0, 00:10:34.060 "ns_manage": 0, 00:10:34.060 "security": 0 00:10:34.060 }, 00:10:34.060 "serial_number": "SPDK0", 00:10:34.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:34.060 "vendor_id": "0x8086" 00:10:34.060 }, 00:10:34.060 "ns_data": { 00:10:34.060 "can_share": true, 00:10:34.060 "id": 1 00:10:34.060 }, 00:10:34.060 "trid": { 00:10:34.060 "adrfam": "IPv4", 00:10:34.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:34.060 "traddr": "10.0.0.2", 00:10:34.060 "trsvcid": "4420", 00:10:34.060 "trtype": "TCP" 00:10:34.060 }, 00:10:34.060 "vs": { 00:10:34.060 "nvme_version": "1.3" 00:10:34.060 } 00:10:34.060 } 00:10:34.060 ] 00:10:34.060 }, 00:10:34.060 "memory_domains": [ 00:10:34.060 { 00:10:34.060 "dma_device_id": "system", 00:10:34.060 "dma_device_type": 1 00:10:34.060 } 00:10:34.060 ], 00:10:34.060 "name": "Nvme0n1", 00:10:34.060 "num_blocks": 38912, 00:10:34.060 "product_name": "NVMe disk", 00:10:34.060 "supported_io_types": { 00:10:34.060 "abort": true, 00:10:34.060 "compare": true, 00:10:34.060 "compare_and_write": true, 00:10:34.060 "copy": true, 00:10:34.060 "flush": true, 00:10:34.060 "get_zone_info": false, 00:10:34.060 "nvme_admin": true, 00:10:34.060 "nvme_io": true, 00:10:34.060 "nvme_io_md": false, 00:10:34.060 "nvme_iov_md": false, 00:10:34.060 "read": true, 00:10:34.060 "reset": true, 00:10:34.060 "seek_data": false, 00:10:34.060 "seek_hole": false, 00:10:34.060 "unmap": true, 00:10:34.060 "write": true, 00:10:34.060 "write_zeroes": true, 00:10:34.060 "zcopy": false, 00:10:34.060 
"zone_append": false, 00:10:34.060 "zone_management": false 00:10:34.060 }, 00:10:34.060 "uuid": "7951ed17-4af5-4da8-bfde-c80ccabf078e", 00:10:34.060 "zoned": false 00:10:34.060 } 00:10:34.060 ] 00:10:34.318 15:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73598 00:10:34.318 15:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:34.318 15:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:34.318 Running I/O for 10 seconds... 00:10:35.266 Latency(us) 00:10:35.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.266 Nvme0n1 : 1.00 7574.00 29.59 0.00 0.00 0.00 0.00 0.00 00:10:35.266 =================================================================================================================== 00:10:35.266 Total : 7574.00 29.59 0.00 0.00 0.00 0.00 0.00 00:10:35.266 00:10:36.227 15:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 59d1170a-63c5-4fed-be29-ec4318bc26f1 00:10:36.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.227 Nvme0n1 : 2.00 7577.50 29.60 0.00 0.00 0.00 0.00 0.00 00:10:36.227 =================================================================================================================== 00:10:36.227 Total : 7577.50 29.60 0.00 0.00 0.00 0.00 0.00 00:10:36.227 00:10:36.484 true 00:10:36.484 15:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59d1170a-63c5-4fed-be29-ec4318bc26f1 00:10:36.484 15:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:36.741 15:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:36.741 15:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:36.741 15:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73598 00:10:37.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.307 Nvme0n1 : 3.00 7620.33 29.77 0.00 0.00 0.00 0.00 0.00 00:10:37.307 =================================================================================================================== 00:10:37.307 Total : 7620.33 29.77 0.00 0.00 0.00 0.00 0.00 00:10:37.307 00:10:38.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.258 Nvme0n1 : 4.00 7588.25 29.64 0.00 0.00 0.00 0.00 0.00 00:10:38.258 =================================================================================================================== 00:10:38.258 Total : 7588.25 29.64 0.00 0.00 0.00 0.00 0.00 00:10:38.258 00:10:39.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.191 Nvme0n1 : 5.00 7536.20 29.44 0.00 0.00 0.00 0.00 0.00 00:10:39.191 =================================================================================================================== 00:10:39.191 Total : 7536.20 29.44 0.00 0.00 0.00 0.00 0.00 00:10:39.191 00:10:40.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.567 
Nvme0n1 : 6.00 7500.17 29.30 0.00 0.00 0.00 0.00 0.00 00:10:40.567 =================================================================================================================== 00:10:40.567 Total : 7500.17 29.30 0.00 0.00 0.00 0.00 0.00 00:10:40.567 00:10:41.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.504 Nvme0n1 : 7.00 7445.43 29.08 0.00 0.00 0.00 0.00 0.00 00:10:41.504 =================================================================================================================== 00:10:41.504 Total : 7445.43 29.08 0.00 0.00 0.00 0.00 0.00 00:10:41.504 00:10:42.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.441 Nvme0n1 : 8.00 7411.62 28.95 0.00 0.00 0.00 0.00 0.00 00:10:42.441 =================================================================================================================== 00:10:42.441 Total : 7411.62 28.95 0.00 0.00 0.00 0.00 0.00 00:10:42.441 00:10:43.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.403 Nvme0n1 : 9.00 7405.22 28.93 0.00 0.00 0.00 0.00 0.00 00:10:43.403 =================================================================================================================== 00:10:43.403 Total : 7405.22 28.93 0.00 0.00 0.00 0.00 0.00 00:10:43.403 00:10:44.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.340 Nvme0n1 : 10.00 7369.40 28.79 0.00 0.00 0.00 0.00 0.00 00:10:44.340 =================================================================================================================== 00:10:44.340 Total : 7369.40 28.79 0.00 0.00 0.00 0.00 0.00 00:10:44.340 00:10:44.340 00:10:44.340 Latency(us) 00:10:44.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.340 Nvme0n1 : 10.02 7370.34 28.79 0.00 0.00 17360.92 7208.96 38606.66 00:10:44.340 =================================================================================================================== 00:10:44.340 Total : 7370.34 28.79 0.00 0.00 17360.92 7208.96 38606.66 00:10:44.340 0 00:10:44.340 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73545 00:10:44.340 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 73545 ']' 00:10:44.340 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 73545 00:10:44.340 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:10:44.340 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.340 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73545 00:10:44.340 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:44.340 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:44.340 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73545' 00:10:44.340 killing process with pid 73545 00:10:44.340 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 73545 00:10:44.340 Received shutdown signal, test time was about 10.000000 seconds 00:10:44.340 00:10:44.340 Latency(us) 00:10:44.340 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.340 =================================================================================================================== 00:10:44.340 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:44.340 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 73545 00:10:44.598 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:44.857 15:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:45.116 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59d1170a-63c5-4fed-be29-ec4318bc26f1 00:10:45.116 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:45.374 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:45.374 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:45.374 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:45.633 [2024-07-15 15:33:40.571246] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:45.633 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59d1170a-63c5-4fed-be29-ec4318bc26f1 00:10:45.633 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:10:45.633 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59d1170a-63c5-4fed-be29-ec4318bc26f1 00:10:45.633 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.633 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:45.633 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.633 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:45.633 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.633 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:45.633 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.633 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:45.633 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59d1170a-63c5-4fed-be29-ec4318bc26f1 00:10:45.892 2024/07/15 15:33:40 error on JSON-RPC call, method: 
bdev_lvol_get_lvstores, params: map[uuid:59d1170a-63c5-4fed-be29-ec4318bc26f1], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:45.892 request: 00:10:45.892 { 00:10:45.892 "method": "bdev_lvol_get_lvstores", 00:10:45.892 "params": { 00:10:45.892 "uuid": "59d1170a-63c5-4fed-be29-ec4318bc26f1" 00:10:45.892 } 00:10:45.892 } 00:10:45.892 Got JSON-RPC error response 00:10:45.892 GoRPCClient: error on JSON-RPC call 00:10:45.892 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:10:45.892 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:45.892 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:45.892 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:45.892 15:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:46.151 aio_bdev 00:10:46.151 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7951ed17-4af5-4da8-bfde-c80ccabf078e 00:10:46.151 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=7951ed17-4af5-4da8-bfde-c80ccabf078e 00:10:46.151 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:46.151 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:10:46.151 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:46.151 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:46.151 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:46.410 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7951ed17-4af5-4da8-bfde-c80ccabf078e -t 2000 00:10:46.669 [ 00:10:46.669 { 00:10:46.669 "aliases": [ 00:10:46.669 "lvs/lvol" 00:10:46.669 ], 00:10:46.669 "assigned_rate_limits": { 00:10:46.669 "r_mbytes_per_sec": 0, 00:10:46.669 "rw_ios_per_sec": 0, 00:10:46.669 "rw_mbytes_per_sec": 0, 00:10:46.669 "w_mbytes_per_sec": 0 00:10:46.669 }, 00:10:46.669 "block_size": 4096, 00:10:46.669 "claimed": false, 00:10:46.669 "driver_specific": { 00:10:46.669 "lvol": { 00:10:46.669 "base_bdev": "aio_bdev", 00:10:46.669 "clone": false, 00:10:46.669 "esnap_clone": false, 00:10:46.669 "lvol_store_uuid": "59d1170a-63c5-4fed-be29-ec4318bc26f1", 00:10:46.669 "num_allocated_clusters": 38, 00:10:46.669 "snapshot": false, 00:10:46.669 "thin_provision": false 00:10:46.669 } 00:10:46.669 }, 00:10:46.669 "name": "7951ed17-4af5-4da8-bfde-c80ccabf078e", 00:10:46.669 "num_blocks": 38912, 00:10:46.669 "product_name": "Logical Volume", 00:10:46.669 "supported_io_types": { 00:10:46.669 "abort": false, 00:10:46.669 "compare": false, 00:10:46.669 "compare_and_write": false, 00:10:46.669 "copy": false, 00:10:46.669 "flush": false, 00:10:46.669 "get_zone_info": false, 00:10:46.669 "nvme_admin": false, 00:10:46.669 "nvme_io": false, 00:10:46.669 "nvme_io_md": false, 00:10:46.669 "nvme_iov_md": false, 00:10:46.669 "read": true, 00:10:46.669 "reset": true, 
00:10:46.669 "seek_data": true, 00:10:46.669 "seek_hole": true, 00:10:46.669 "unmap": true, 00:10:46.669 "write": true, 00:10:46.669 "write_zeroes": true, 00:10:46.669 "zcopy": false, 00:10:46.669 "zone_append": false, 00:10:46.669 "zone_management": false 00:10:46.669 }, 00:10:46.669 "uuid": "7951ed17-4af5-4da8-bfde-c80ccabf078e", 00:10:46.669 "zoned": false 00:10:46.669 } 00:10:46.669 ] 00:10:46.669 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:10:46.669 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59d1170a-63c5-4fed-be29-ec4318bc26f1 00:10:46.669 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:46.669 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:46.669 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59d1170a-63c5-4fed-be29-ec4318bc26f1 00:10:46.669 15:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:46.928 15:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:46.928 15:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7951ed17-4af5-4da8-bfde-c80ccabf078e 00:10:47.188 15:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 59d1170a-63c5-4fed-be29-ec4318bc26f1 00:10:47.448 15:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:47.707 15:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:48.289 00:10:48.289 real 0m18.326s 00:10:48.289 user 0m17.682s 00:10:48.289 sys 0m2.116s 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.289 ************************************ 00:10:48.289 END TEST lvs_grow_clean 00:10:48.289 ************************************ 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:48.289 ************************************ 00:10:48.289 START TEST lvs_grow_dirty 00:10:48.289 ************************************ 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:48.289 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:48.560 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:48.560 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:48.819 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:10:48.819 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:10:48.819 15:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:49.077 15:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:49.077 15:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:49.077 15:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d lvol 150 00:10:49.336 15:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=224753fe-7eaf-424e-b1e6-ff1faaf73545 00:10:49.336 15:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:49.336 15:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:49.594 [2024-07-15 15:33:44.532268] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:49.594 [2024-07-15 15:33:44.532353] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:49.594 true 00:10:49.594 15:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:49.594 15:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:10:49.852 15:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:49.852 15:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:50.109 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 224753fe-7eaf-424e-b1e6-ff1faaf73545 00:10:50.367 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:50.626 [2024-07-15 15:33:45.600821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.626 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:50.884 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73991 00:10:50.884 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:50.884 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:50.884 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73991 /var/tmp/bdevperf.sock 00:10:50.884 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 73991 ']' 00:10:50.884 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:50.884 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:50.884 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:50.884 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.884 15:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:50.884 [2024-07-15 15:33:45.922119] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:10:50.884 [2024-07-15 15:33:45.922276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73991 ] 00:10:51.142 [2024-07-15 15:33:46.072571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.143 [2024-07-15 15:33:46.132635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.076 15:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.076 15:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:52.076 15:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:52.076 Nvme0n1 00:10:52.334 15:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:52.334 [ 00:10:52.334 { 00:10:52.334 "aliases": [ 00:10:52.334 "224753fe-7eaf-424e-b1e6-ff1faaf73545" 00:10:52.334 ], 00:10:52.334 "assigned_rate_limits": { 00:10:52.334 "r_mbytes_per_sec": 0, 00:10:52.334 "rw_ios_per_sec": 0, 00:10:52.334 "rw_mbytes_per_sec": 0, 00:10:52.334 "w_mbytes_per_sec": 0 00:10:52.334 }, 00:10:52.334 "block_size": 4096, 00:10:52.334 "claimed": false, 00:10:52.334 "driver_specific": { 00:10:52.334 "mp_policy": "active_passive", 00:10:52.334 "nvme": [ 00:10:52.334 { 00:10:52.334 "ctrlr_data": { 00:10:52.334 "ana_reporting": false, 00:10:52.334 "cntlid": 1, 00:10:52.334 "firmware_revision": "24.09", 00:10:52.334 "model_number": "SPDK bdev Controller", 00:10:52.334 "multi_ctrlr": true, 00:10:52.334 "oacs": { 00:10:52.334 "firmware": 0, 00:10:52.334 "format": 0, 00:10:52.334 "ns_manage": 0, 00:10:52.334 "security": 0 00:10:52.334 }, 00:10:52.334 "serial_number": "SPDK0", 00:10:52.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:52.334 "vendor_id": "0x8086" 00:10:52.334 }, 00:10:52.334 "ns_data": { 00:10:52.334 "can_share": true, 00:10:52.334 "id": 1 00:10:52.334 }, 00:10:52.334 "trid": { 00:10:52.334 "adrfam": "IPv4", 00:10:52.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:52.334 "traddr": "10.0.0.2", 00:10:52.334 "trsvcid": "4420", 00:10:52.334 "trtype": "TCP" 00:10:52.334 }, 00:10:52.334 "vs": { 00:10:52.334 "nvme_version": "1.3" 00:10:52.334 } 00:10:52.334 } 00:10:52.334 ] 00:10:52.334 }, 00:10:52.334 "memory_domains": [ 00:10:52.334 { 00:10:52.334 "dma_device_id": "system", 00:10:52.334 "dma_device_type": 1 00:10:52.334 } 00:10:52.334 ], 00:10:52.334 "name": "Nvme0n1", 00:10:52.334 "num_blocks": 38912, 00:10:52.334 "product_name": "NVMe disk", 00:10:52.334 "supported_io_types": { 00:10:52.334 "abort": true, 00:10:52.334 "compare": true, 00:10:52.334 "compare_and_write": true, 00:10:52.334 "copy": true, 00:10:52.334 "flush": true, 00:10:52.334 "get_zone_info": false, 00:10:52.334 "nvme_admin": true, 00:10:52.334 "nvme_io": true, 00:10:52.334 "nvme_io_md": false, 00:10:52.334 "nvme_iov_md": false, 00:10:52.334 "read": true, 00:10:52.334 "reset": true, 00:10:52.334 "seek_data": false, 00:10:52.334 "seek_hole": false, 00:10:52.334 "unmap": true, 00:10:52.334 "write": true, 00:10:52.334 "write_zeroes": true, 00:10:52.334 "zcopy": false, 00:10:52.334 
"zone_append": false, 00:10:52.334 "zone_management": false 00:10:52.334 }, 00:10:52.334 "uuid": "224753fe-7eaf-424e-b1e6-ff1faaf73545", 00:10:52.334 "zoned": false 00:10:52.334 } 00:10:52.334 ] 00:10:52.592 15:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:52.592 15:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74039 00:10:52.592 15:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:52.592 Running I/O for 10 seconds... 00:10:53.526 Latency(us) 00:10:53.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.527 Nvme0n1 : 1.00 7476.00 29.20 0.00 0.00 0.00 0.00 0.00 00:10:53.527 =================================================================================================================== 00:10:53.527 Total : 7476.00 29.20 0.00 0.00 0.00 0.00 0.00 00:10:53.527 00:10:54.461 15:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:10:54.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.461 Nvme0n1 : 2.00 7476.00 29.20 0.00 0.00 0.00 0.00 0.00 00:10:54.461 =================================================================================================================== 00:10:54.461 Total : 7476.00 29.20 0.00 0.00 0.00 0.00 0.00 00:10:54.461 00:10:54.733 true 00:10:54.733 15:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:10:54.733 15:33:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:54.992 15:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:54.992 15:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:54.992 15:33:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74039 00:10:55.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.559 Nvme0n1 : 3.00 7495.33 29.28 0.00 0.00 0.00 0.00 0.00 00:10:55.559 =================================================================================================================== 00:10:55.559 Total : 7495.33 29.28 0.00 0.00 0.00 0.00 0.00 00:10:55.559 00:10:56.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:56.493 Nvme0n1 : 4.00 7529.75 29.41 0.00 0.00 0.00 0.00 0.00 00:10:56.493 =================================================================================================================== 00:10:56.493 Total : 7529.75 29.41 0.00 0.00 0.00 0.00 0.00 00:10:56.493 00:10:57.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.868 Nvme0n1 : 5.00 7544.80 29.47 0.00 0.00 0.00 0.00 0.00 00:10:57.868 =================================================================================================================== 00:10:57.868 Total : 7544.80 29.47 0.00 0.00 0.00 0.00 0.00 00:10:57.868 00:10:58.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.804 
Nvme0n1 : 6.00 7534.17 29.43 0.00 0.00 0.00 0.00 0.00 00:10:58.804 =================================================================================================================== 00:10:58.804 Total : 7534.17 29.43 0.00 0.00 0.00 0.00 0.00 00:10:58.804 00:10:59.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.741 Nvme0n1 : 7.00 7517.43 29.36 0.00 0.00 0.00 0.00 0.00 00:10:59.741 =================================================================================================================== 00:10:59.741 Total : 7517.43 29.36 0.00 0.00 0.00 0.00 0.00 00:10:59.741 00:11:00.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.677 Nvme0n1 : 8.00 7372.12 28.80 0.00 0.00 0.00 0.00 0.00 00:11:00.677 =================================================================================================================== 00:11:00.677 Total : 7372.12 28.80 0.00 0.00 0.00 0.00 0.00 00:11:00.677 00:11:01.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.613 Nvme0n1 : 9.00 7352.11 28.72 0.00 0.00 0.00 0.00 0.00 00:11:01.613 =================================================================================================================== 00:11:01.613 Total : 7352.11 28.72 0.00 0.00 0.00 0.00 0.00 00:11:01.613 00:11:02.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.548 Nvme0n1 : 10.00 7336.90 28.66 0.00 0.00 0.00 0.00 0.00 00:11:02.548 =================================================================================================================== 00:11:02.548 Total : 7336.90 28.66 0.00 0.00 0.00 0.00 0.00 00:11:02.548 00:11:02.548 00:11:02.548 Latency(us) 00:11:02.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.548 Nvme0n1 : 10.02 7336.21 28.66 0.00 0.00 17435.61 8043.05 156333.15 00:11:02.548 =================================================================================================================== 00:11:02.548 Total : 7336.21 28.66 0.00 0.00 17435.61 8043.05 156333.15 00:11:02.548 0 00:11:02.548 15:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73991 00:11:02.548 15:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 73991 ']' 00:11:02.548 15:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 73991 00:11:02.548 15:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:02.548 15:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:02.548 15:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73991 00:11:02.548 15:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:02.548 15:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:02.548 killing process with pid 73991 00:11:02.548 15:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73991' 00:11:02.548 Received shutdown signal, test time was about 10.000000 seconds 00:11:02.548 00:11:02.548 Latency(us) 00:11:02.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.548 
=================================================================================================================== 00:11:02.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:02.548 15:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 73991 00:11:02.548 15:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 73991 00:11:02.807 15:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:03.065 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:03.324 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:11:03.324 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73397 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73397 00:11:03.593 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73397 Killed "${NVMF_APP[@]}" "$@" 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74203 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74203 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74203 ']' 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
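Both the clean and dirty runs exercise the same core grow flow; stripped of the harness, it is a short RPC sequence against a running SPDK target. A minimal sketch, assuming rpc.py talks to the default /var/tmp/spdk.sock and using an illustrative backing-file path in place of the suite's test/nvmf/target/aio_bdev:

  truncate -s 200M /tmp/aio_backing                      # illustrative path, not the suite's
  scripts/rpc.py bdev_aio_create /tmp/aio_backing aio_bdev 4096
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
  scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150     # 150 MiB lvol on the 200 MiB store
  # grow the backing file, let the AIO bdev pick up the new size, then grow the lvstore
  truncate -s 400M /tmp/aio_backing
  scripts/rpc.py bdev_aio_rescan aio_bdev
  scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99, as checked in the trace

The dirty variant essentially adds the SIGKILL of the target seen just above: when aio_bdev is re-created under the restarted target further down, the blobstore is replayed (the bs_recover notices) and the free/total cluster counts are expected to survive the crash.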
00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.593 15:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:03.593 [2024-07-15 15:33:58.684327] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:03.593 [2024-07-15 15:33:58.684426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.910 [2024-07-15 15:33:58.824398] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.910 [2024-07-15 15:33:58.882034] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.910 [2024-07-15 15:33:58.882095] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.910 [2024-07-15 15:33:58.882105] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.910 [2024-07-15 15:33:58.882112] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.910 [2024-07-15 15:33:58.882118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.910 [2024-07-15 15:33:58.882145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.535 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.535 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:04.535 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.535 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.535 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:04.535 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.535 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:04.793 [2024-07-15 15:33:59.882935] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:04.793 [2024-07-15 15:33:59.883193] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:04.793 [2024-07-15 15:33:59.883403] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:05.051 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:05.051 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 224753fe-7eaf-424e-b1e6-ff1faaf73545 00:11:05.051 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=224753fe-7eaf-424e-b1e6-ff1faaf73545 00:11:05.051 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:05.051 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:05.051 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:05.051 15:33:59 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:05.051 15:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:05.051 15:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 224753fe-7eaf-424e-b1e6-ff1faaf73545 -t 2000 00:11:05.311 [ 00:11:05.311 { 00:11:05.311 "aliases": [ 00:11:05.311 "lvs/lvol" 00:11:05.311 ], 00:11:05.311 "assigned_rate_limits": { 00:11:05.311 "r_mbytes_per_sec": 0, 00:11:05.311 "rw_ios_per_sec": 0, 00:11:05.311 "rw_mbytes_per_sec": 0, 00:11:05.311 "w_mbytes_per_sec": 0 00:11:05.311 }, 00:11:05.311 "block_size": 4096, 00:11:05.311 "claimed": false, 00:11:05.311 "driver_specific": { 00:11:05.311 "lvol": { 00:11:05.311 "base_bdev": "aio_bdev", 00:11:05.311 "clone": false, 00:11:05.311 "esnap_clone": false, 00:11:05.311 "lvol_store_uuid": "24fd98b7-4307-41eb-92cd-6f1abd9b278d", 00:11:05.311 "num_allocated_clusters": 38, 00:11:05.311 "snapshot": false, 00:11:05.311 "thin_provision": false 00:11:05.311 } 00:11:05.311 }, 00:11:05.311 "name": "224753fe-7eaf-424e-b1e6-ff1faaf73545", 00:11:05.311 "num_blocks": 38912, 00:11:05.311 "product_name": "Logical Volume", 00:11:05.311 "supported_io_types": { 00:11:05.311 "abort": false, 00:11:05.311 "compare": false, 00:11:05.311 "compare_and_write": false, 00:11:05.311 "copy": false, 00:11:05.311 "flush": false, 00:11:05.311 "get_zone_info": false, 00:11:05.311 "nvme_admin": false, 00:11:05.311 "nvme_io": false, 00:11:05.311 "nvme_io_md": false, 00:11:05.311 "nvme_iov_md": false, 00:11:05.311 "read": true, 00:11:05.311 "reset": true, 00:11:05.311 "seek_data": true, 00:11:05.311 "seek_hole": true, 00:11:05.311 "unmap": true, 00:11:05.311 "write": true, 00:11:05.311 "write_zeroes": true, 00:11:05.311 "zcopy": false, 00:11:05.311 "zone_append": false, 00:11:05.311 "zone_management": false 00:11:05.311 }, 00:11:05.311 "uuid": "224753fe-7eaf-424e-b1e6-ff1faaf73545", 00:11:05.311 "zoned": false 00:11:05.311 } 00:11:05.311 ] 00:11:05.311 15:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:05.311 15:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:11:05.311 15:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:05.569 15:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:05.569 15:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:11:05.569 15:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:05.828 15:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:05.828 15:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:06.087 [2024-07-15 15:34:01.100631] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:06.087 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:11:06.087 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:11:06.087 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:11:06.087 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.087 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.087 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.087 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.087 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.087 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.087 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.087 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:06.087 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:11:06.346 2024/07/15 15:34:01 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:24fd98b7-4307-41eb-92cd-6f1abd9b278d], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:11:06.346 request: 00:11:06.346 { 00:11:06.346 "method": "bdev_lvol_get_lvstores", 00:11:06.346 "params": { 00:11:06.346 "uuid": "24fd98b7-4307-41eb-92cd-6f1abd9b278d" 00:11:06.346 } 00:11:06.346 } 00:11:06.346 Got JSON-RPC error response 00:11:06.346 GoRPCClient: error on JSON-RPC call 00:11:06.346 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:11:06.346 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:06.346 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:06.346 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:06.346 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:06.604 aio_bdev 00:11:06.604 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 224753fe-7eaf-424e-b1e6-ff1faaf73545 00:11:06.604 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=224753fe-7eaf-424e-b1e6-ff1faaf73545 00:11:06.604 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:06.604 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:06.604 15:34:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:06.604 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:06.604 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:06.862 15:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 224753fe-7eaf-424e-b1e6-ff1faaf73545 -t 2000 00:11:07.121 [ 00:11:07.121 { 00:11:07.121 "aliases": [ 00:11:07.121 "lvs/lvol" 00:11:07.121 ], 00:11:07.121 "assigned_rate_limits": { 00:11:07.121 "r_mbytes_per_sec": 0, 00:11:07.121 "rw_ios_per_sec": 0, 00:11:07.121 "rw_mbytes_per_sec": 0, 00:11:07.121 "w_mbytes_per_sec": 0 00:11:07.121 }, 00:11:07.121 "block_size": 4096, 00:11:07.121 "claimed": false, 00:11:07.121 "driver_specific": { 00:11:07.121 "lvol": { 00:11:07.121 "base_bdev": "aio_bdev", 00:11:07.121 "clone": false, 00:11:07.121 "esnap_clone": false, 00:11:07.121 "lvol_store_uuid": "24fd98b7-4307-41eb-92cd-6f1abd9b278d", 00:11:07.121 "num_allocated_clusters": 38, 00:11:07.121 "snapshot": false, 00:11:07.121 "thin_provision": false 00:11:07.121 } 00:11:07.121 }, 00:11:07.121 "name": "224753fe-7eaf-424e-b1e6-ff1faaf73545", 00:11:07.121 "num_blocks": 38912, 00:11:07.121 "product_name": "Logical Volume", 00:11:07.121 "supported_io_types": { 00:11:07.121 "abort": false, 00:11:07.121 "compare": false, 00:11:07.121 "compare_and_write": false, 00:11:07.121 "copy": false, 00:11:07.121 "flush": false, 00:11:07.121 "get_zone_info": false, 00:11:07.121 "nvme_admin": false, 00:11:07.121 "nvme_io": false, 00:11:07.121 "nvme_io_md": false, 00:11:07.121 "nvme_iov_md": false, 00:11:07.121 "read": true, 00:11:07.121 "reset": true, 00:11:07.121 "seek_data": true, 00:11:07.121 "seek_hole": true, 00:11:07.121 "unmap": true, 00:11:07.121 "write": true, 00:11:07.121 "write_zeroes": true, 00:11:07.121 "zcopy": false, 00:11:07.121 "zone_append": false, 00:11:07.121 "zone_management": false 00:11:07.121 }, 00:11:07.121 "uuid": "224753fe-7eaf-424e-b1e6-ff1faaf73545", 00:11:07.121 "zoned": false 00:11:07.121 } 00:11:07.121 ] 00:11:07.121 15:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:07.121 15:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:11:07.121 15:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:07.379 15:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:07.379 15:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:11:07.379 15:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:07.638 15:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:07.638 15:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 224753fe-7eaf-424e-b1e6-ff1faaf73545 00:11:07.896 15:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 24fd98b7-4307-41eb-92cd-6f1abd9b278d 00:11:08.154 15:34:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:08.412 15:34:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:08.670 00:11:08.670 real 0m20.353s 00:11:08.670 user 0m40.889s 00:11:08.670 sys 0m9.193s 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:08.670 ************************************ 00:11:08.670 END TEST lvs_grow_dirty 00:11:08.670 ************************************ 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:08.670 nvmf_trace.0 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.670 15:34:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:11:08.927 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.927 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:11:08.927 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.927 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.927 rmmod nvme_tcp 00:11:08.927 rmmod nvme_fabrics 00:11:08.927 rmmod nvme_keyring 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74203 ']' 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74203 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74203 ']' 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74203 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:11:09.186 15:34:04 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74203 00:11:09.186 killing process with pid 74203 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74203' 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74203 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74203 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:09.186 ************************************ 00:11:09.186 END TEST nvmf_lvs_grow 00:11:09.186 ************************************ 00:11:09.186 00:11:09.186 real 0m40.509s 00:11:09.186 user 1m4.663s 00:11:09.186 sys 0m12.006s 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.186 15:34:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:09.444 15:34:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:09.444 15:34:04 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:09.444 15:34:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.444 15:34:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.444 15:34:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.444 ************************************ 00:11:09.444 START TEST nvmf_bdev_io_wait 00:11:09.444 ************************************ 00:11:09.444 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:09.444 * Looking for test storage... 
00:11:09.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:09.444 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:09.444 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:09.444 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:09.445 Cannot find device "nvmf_tgt_br" 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:09.445 Cannot find device "nvmf_tgt_br2" 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:09.445 Cannot find device "nvmf_tgt_br" 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:09.445 Cannot find device "nvmf_tgt_br2" 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:11:09.445 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:09.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:09.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:09.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:11:09.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:11:09.703 00:11:09.703 --- 10.0.0.2 ping statistics --- 00:11:09.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.703 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:09.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:09.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:11:09.703 00:11:09.703 --- 10.0.0.3 ping statistics --- 00:11:09.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.703 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:09.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:11:09.703 00:11:09.703 --- 10.0.0.1 ping statistics --- 00:11:09.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.703 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:09.703 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.704 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=74618 00:11:09.704 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:09.704 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 74618 00:11:09.704 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 74618 ']' 00:11:09.704 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.704 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:09.704 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:09.704 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:09.704 15:34:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.961 [2024-07-15 15:34:04.856214] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:09.961 [2024-07-15 15:34:04.856316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.961 [2024-07-15 15:34:04.995592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.961 [2024-07-15 15:34:05.052285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.961 [2024-07-15 15:34:05.052595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.961 [2024-07-15 15:34:05.052741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.961 [2024-07-15 15:34:05.052860] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.961 [2024-07-15 15:34:05.052900] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.961 [2024-07-15 15:34:05.053106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.961 [2024-07-15 15:34:05.053240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.961 [2024-07-15 15:34:05.053415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.961 [2024-07-15 15:34:05.053326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.961 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.961 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:11:09.961 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.961 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:09.961 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.220 15:34:05 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.220 [2024-07-15 15:34:05.176812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.220 Malloc0 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.220 [2024-07-15 15:34:05.219395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74656 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74658 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:10.220 { 00:11:10.220 "params": { 00:11:10.220 "name": "Nvme$subsystem", 00:11:10.220 "trtype": "$TEST_TRANSPORT", 00:11:10.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:10.220 "adrfam": "ipv4", 00:11:10.220 "trsvcid": "$NVMF_PORT", 00:11:10.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:10.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:10.220 "hdgst": ${hdgst:-false}, 00:11:10.220 "ddgst": 
${ddgst:-false} 00:11:10.220 }, 00:11:10.220 "method": "bdev_nvme_attach_controller" 00:11:10.220 } 00:11:10.220 EOF 00:11:10.220 )") 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74660 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74662 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:10.220 { 00:11:10.220 "params": { 00:11:10.220 "name": "Nvme$subsystem", 00:11:10.220 "trtype": "$TEST_TRANSPORT", 00:11:10.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:10.220 "adrfam": "ipv4", 00:11:10.220 "trsvcid": "$NVMF_PORT", 00:11:10.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:10.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:10.220 "hdgst": ${hdgst:-false}, 00:11:10.220 "ddgst": ${ddgst:-false} 00:11:10.220 }, 00:11:10.220 "method": "bdev_nvme_attach_controller" 00:11:10.220 } 00:11:10.220 EOF 00:11:10.220 )") 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:10.220 { 00:11:10.220 "params": { 00:11:10.220 "name": "Nvme$subsystem", 00:11:10.220 "trtype": "$TEST_TRANSPORT", 00:11:10.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:10.220 "adrfam": "ipv4", 00:11:10.220 "trsvcid": "$NVMF_PORT", 00:11:10.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:10.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:10.220 "hdgst": ${hdgst:-false}, 00:11:10.220 "ddgst": ${ddgst:-false} 00:11:10.220 }, 00:11:10.220 "method": "bdev_nvme_attach_controller" 00:11:10.220 } 00:11:10.220 EOF 00:11:10.220 )") 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:10.220 { 00:11:10.220 "params": { 00:11:10.220 "name": "Nvme$subsystem", 00:11:10.220 "trtype": "$TEST_TRANSPORT", 00:11:10.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:10.220 "adrfam": "ipv4", 00:11:10.220 "trsvcid": "$NVMF_PORT", 00:11:10.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:10.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:10.220 "hdgst": ${hdgst:-false}, 00:11:10.220 "ddgst": ${ddgst:-false} 00:11:10.220 }, 00:11:10.220 "method": "bdev_nvme_attach_controller" 00:11:10.220 } 00:11:10.220 EOF 00:11:10.220 )") 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:10.220 "params": { 00:11:10.220 "name": "Nvme1", 00:11:10.220 "trtype": "tcp", 00:11:10.220 "traddr": "10.0.0.2", 00:11:10.220 "adrfam": "ipv4", 00:11:10.220 "trsvcid": "4420", 00:11:10.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:10.220 "hdgst": false, 00:11:10.220 "ddgst": false 00:11:10.220 }, 00:11:10.220 "method": "bdev_nvme_attach_controller" 00:11:10.220 }' 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:10.220 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:10.220 "params": { 00:11:10.220 "name": "Nvme1", 00:11:10.220 "trtype": "tcp", 00:11:10.220 "traddr": "10.0.0.2", 00:11:10.220 "adrfam": "ipv4", 00:11:10.220 "trsvcid": "4420", 00:11:10.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:10.220 "hdgst": false, 00:11:10.220 "ddgst": false 00:11:10.220 }, 00:11:10.221 "method": "bdev_nvme_attach_controller" 00:11:10.221 }' 00:11:10.221 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:10.221 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:11:10.221 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:10.221 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:10.221 "params": { 00:11:10.221 "name": "Nvme1", 00:11:10.221 "trtype": "tcp", 00:11:10.221 "traddr": "10.0.0.2", 00:11:10.221 "adrfam": "ipv4", 00:11:10.221 "trsvcid": "4420", 00:11:10.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:10.221 "hdgst": false, 00:11:10.221 "ddgst": false 00:11:10.221 }, 00:11:10.221 "method": "bdev_nvme_attach_controller" 00:11:10.221 }' 00:11:10.221 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:10.221 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:10.221 "params": { 00:11:10.221 "name": "Nvme1", 00:11:10.221 "trtype": "tcp", 00:11:10.221 "traddr": "10.0.0.2", 00:11:10.221 "adrfam": "ipv4", 00:11:10.221 "trsvcid": "4420", 00:11:10.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:10.221 "hdgst": false, 00:11:10.221 "ddgst": false 00:11:10.221 }, 00:11:10.221 "method": "bdev_nvme_attach_controller" 00:11:10.221 }' 00:11:10.221 [2024-07-15 15:34:05.286335] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:10.221 [2024-07-15 15:34:05.286420] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:10.221 15:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74656 00:11:10.221 [2024-07-15 15:34:05.292073] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:10.221 [2024-07-15 15:34:05.292177] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:10.221 [2024-07-15 15:34:05.320265] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:10.221 [2024-07-15 15:34:05.320351] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:10.221 [2024-07-15 15:34:05.325913] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:11:10.221 [2024-07-15 15:34:05.326170] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:10.478 [2024-07-15 15:34:05.469034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.478 [2024-07-15 15:34:05.507489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.478 [2024-07-15 15:34:05.526003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:10.478 [2024-07-15 15:34:05.552676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.478 [2024-07-15 15:34:05.581696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:10.478 [2024-07-15 15:34:05.598814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.736 [2024-07-15 15:34:05.641938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:10.736 [2024-07-15 15:34:05.653028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:11:10.736 Running I/O for 1 seconds... 00:11:10.736 Running I/O for 1 seconds... 00:11:10.736 Running I/O for 1 seconds... 00:11:10.736 Running I/O for 1 seconds... 00:11:11.669 00:11:11.669 Latency(us) 00:11:11.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.669 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:11.669 Nvme1n1 : 1.02 6222.26 24.31 0.00 0.00 20382.83 9770.82 34078.72 00:11:11.669 =================================================================================================================== 00:11:11.669 Total : 6222.26 24.31 0.00 0.00 20382.83 9770.82 34078.72 00:11:11.669 00:11:11.669 Latency(us) 00:11:11.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.669 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:11.669 Nvme1n1 : 1.00 190326.53 743.46 0.00 0.00 669.79 275.55 808.03 00:11:11.669 =================================================================================================================== 00:11:11.669 Total : 190326.53 743.46 0.00 0.00 669.79 275.55 808.03 00:11:11.669 00:11:11.669 Latency(us) 00:11:11.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.669 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:11.669 Nvme1n1 : 1.01 8690.03 33.95 0.00 0.00 14660.60 6821.70 24665.37 00:11:11.669 =================================================================================================================== 00:11:11.669 Total : 8690.03 33.95 0.00 0.00 14660.60 6821.70 24665.37 00:11:11.669 00:11:11.669 Latency(us) 00:11:11.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.669 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:11.669 Nvme1n1 : 1.00 6291.18 24.57 0.00 0.00 20287.83 4796.04 48854.11 00:11:11.669 =================================================================================================================== 00:11:11.669 Total : 6291.18 24.57 0.00 0.00 20287.83 4796.04 48854.11 00:11:11.927 15:34:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74658 00:11:11.927 15:34:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74660 00:11:11.927 15:34:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74662 00:11:11.927 
15:34:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.927 15:34:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.927 15:34:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:11.927 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.927 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:11.927 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:11.927 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:11.927 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:11.927 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:11.927 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:11.927 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:11.927 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:11.927 rmmod nvme_tcp 00:11:12.185 rmmod nvme_fabrics 00:11:12.185 rmmod nvme_keyring 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 74618 ']' 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 74618 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 74618 ']' 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 74618 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74618 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74618' 00:11:12.185 killing process with pid 74618 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 74618 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 74618 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:12.185 00:11:12.185 real 0m2.941s 00:11:12.185 user 0m13.399s 00:11:12.185 sys 0m1.685s 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.185 15:34:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.185 ************************************ 00:11:12.185 END TEST nvmf_bdev_io_wait 00:11:12.185 ************************************ 00:11:12.444 15:34:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:12.444 15:34:07 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:12.444 15:34:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:12.444 15:34:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.444 15:34:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:12.444 ************************************ 00:11:12.444 START TEST nvmf_queue_depth 00:11:12.444 ************************************ 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:12.444 * Looking for test storage... 00:11:12.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.444 15:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:12.445 Cannot find device 
"nvmf_tgt_br" 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:12.445 Cannot find device "nvmf_tgt_br2" 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:12.445 Cannot find device "nvmf_tgt_br" 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:12.445 Cannot find device "nvmf_tgt_br2" 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:12.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:12.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:12.445 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:12.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:11:12.703 00:11:12.703 --- 10.0.0.2 ping statistics --- 00:11:12.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.703 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:12.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:12.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:11:12.703 00:11:12.703 --- 10.0.0.3 ping statistics --- 00:11:12.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.703 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:12.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:11:12.703 00:11:12.703 --- 10.0.0.1 ping statistics --- 00:11:12.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.703 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=74863 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 74863 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 74863 ']' 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.703 15:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:12.962 [2024-07-15 15:34:07.877893] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:12.962 [2024-07-15 15:34:07.878027] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.962 [2024-07-15 15:34:08.016601] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.962 [2024-07-15 15:34:08.070673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.962 [2024-07-15 15:34:08.070784] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:12.962 [2024-07-15 15:34:08.070795] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.962 [2024-07-15 15:34:08.070804] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.962 [2024-07-15 15:34:08.070811] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.962 [2024-07-15 15:34:08.070837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.898 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.898 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:13.898 15:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.898 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:13.898 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.898 15:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.898 15:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.898 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.898 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.898 [2024-07-15 15:34:08.866840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.898 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.898 15:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:13.898 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.899 Malloc0 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.899 [2024-07-15 15:34:08.929928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.899 15:34:08 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=74919 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 74919 /var/tmp/bdevperf.sock 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 74919 ']' 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.899 15:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.899 [2024-07-15 15:34:08.991639] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:13.899 [2024-07-15 15:34:08.991726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74919 ] 00:11:14.157 [2024-07-15 15:34:09.133663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.157 [2024-07-15 15:34:09.204320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.093 15:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.093 15:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:15.093 15:34:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:15.093 15:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.093 15:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:15.093 NVMe0n1 00:11:15.093 15:34:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.093 15:34:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:15.093 Running I/O for 10 seconds... 
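The queue-depth run above reduces to three steps against bdevperf's private RPC socket: start bdevperf idle, attach the target's namespace over NVMe/TCP, then trigger the workload. A minimal sketch of the same sequence, using the paths and arguments shown in the log and plain scripts/rpc.py in place of the suite's rpc_cmd helper (backgrounding bdevperf with & is an assumption for a manual run):

  # 1. bdevperf in wait mode (-z) on its own socket: queue depth 1024, 4 KiB verify I/O for 10 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

  # 2. attach the subsystem exported on 10.0.0.2:4420 as bdev "NVMe0"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # 3. run the configured workload; the results table is printed once the 10 s window ends
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests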
00:11:27.304 00:11:27.304 Latency(us) 00:11:27.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.304 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:27.304 Verification LBA range: start 0x0 length 0x4000 00:11:27.304 NVMe0n1 : 10.06 9600.58 37.50 0.00 0.00 106188.25 14120.03 72447.07 00:11:27.305 =================================================================================================================== 00:11:27.305 Total : 9600.58 37.50 0.00 0.00 106188.25 14120.03 72447.07 00:11:27.305 0 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 74919 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 74919 ']' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 74919 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74919 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:27.305 killing process with pid 74919 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74919' 00:11:27.305 Received shutdown signal, test time was about 10.000000 seconds 00:11:27.305 00:11:27.305 Latency(us) 00:11:27.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.305 =================================================================================================================== 00:11:27.305 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 74919 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 74919 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:27.305 rmmod nvme_tcp 00:11:27.305 rmmod nvme_fabrics 00:11:27.305 rmmod nvme_keyring 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 74863 ']' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 74863 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 74863 ']' 00:11:27.305 
15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 74863 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74863 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:27.305 killing process with pid 74863 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74863' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 74863 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 74863 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:27.305 00:11:27.305 real 0m13.415s 00:11:27.305 user 0m23.326s 00:11:27.305 sys 0m1.945s 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.305 15:34:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:27.305 ************************************ 00:11:27.305 END TEST nvmf_queue_depth 00:11:27.305 ************************************ 00:11:27.305 15:34:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:27.305 15:34:20 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:27.305 15:34:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:27.305 15:34:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.305 15:34:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.305 ************************************ 00:11:27.305 START TEST nvmf_target_multipath 00:11:27.305 ************************************ 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:27.305 * Looking for test storage... 
00:11:27.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.305 15:34:20 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:27.305 Cannot find device "nvmf_tgt_br" 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.305 Cannot find device "nvmf_tgt_br2" 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:27.305 Cannot find device "nvmf_tgt_br" 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:11:27.305 
15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:27.305 Cannot find device "nvmf_tgt_br2" 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:11:27.305 15:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:27.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:11:27.305 00:11:27.305 --- 10.0.0.2 ping statistics --- 00:11:27.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.305 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:27.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:27.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:11:27.305 00:11:27.305 --- 10.0.0.3 ping statistics --- 00:11:27.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.305 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:27.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:27.305 00:11:27.305 --- 10.0.0.1 ping statistics --- 00:11:27.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.305 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75250 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
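The pings above are the tail end of nvmf_veth_init: the initiator keeps 10.0.0.1 on nvmf_init_if, while both target addresses (10.0.0.2 and 10.0.0.3, one per multipath port) sit on veth peers inside the nvmf_tgt_ns_spdk namespace, with the host-side peers bridged over nvmf_br. A condensed sketch of that topology, assuming a clean host with none of the interfaces left over from a previous run:

  # one namespace for the target, three veth pairs, one bridge joining the host ends
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target path 1, 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target path 2, 10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addresses on the endpoints, everything brought up
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # host-side peers join the bridge; allow port 4420 in and bridge forwarding
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # same reachability checks as in the log

After that, the target itself is launched inside the namespace (the ip netns exec ... nvmf_tgt -m 0xF line above), so its TCP listeners bind to the 10.0.0.2/10.0.0.3 side of the bridge.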
00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75250 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75250 ']' 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.305 15:34:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:27.305 [2024-07-15 15:34:21.344309] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:27.305 [2024-07-15 15:34:21.344981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.305 [2024-07-15 15:34:21.486992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.305 [2024-07-15 15:34:21.559771] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.305 [2024-07-15 15:34:21.559830] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.306 [2024-07-15 15:34:21.559844] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.306 [2024-07-15 15:34:21.559854] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.306 [2024-07-15 15:34:21.559863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
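Core mask 0xF is what gives the multipath target its four reactors (the "Total cores available: 4" notice above and the reactor-start messages that follow). A sketch of the equivalent manual launch, reusing the namespace and binary paths from the log; the rpc_get_methods poll is only a stand-in for the suite's waitforlisten helper:

  # nvmf_tgt inside the target namespace: instance 0, tracepoint group mask 0xFFFF, cores 0-3
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # wait until the JSON-RPC socket answers before configuring transports or subsystems
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
      sleep 0.5
  done

The -e 0xFFFF tracepoint mask is also why the startup notices mention /dev/shm/nvmf_trace.0; a snapshot can be taken at runtime with spdk_trace -s nvmf -i 0, as the log itself suggests.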
00:11:27.306 [2024-07-15 15:34:21.561584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.306 [2024-07-15 15:34:21.561747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.306 [2024-07-15 15:34:21.562444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.306 [2024-07-15 15:34:21.562500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.306 15:34:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.306 15:34:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:11:27.306 15:34:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:27.306 15:34:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:27.306 15:34:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:27.306 15:34:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.306 15:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:27.563 [2024-07-15 15:34:22.674397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.822 15:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:28.080 Malloc0 00:11:28.080 15:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:28.337 15:34:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.337 15:34:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.596 [2024-07-15 15:34:23.707960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.854 15:34:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:28.854 [2024-07-15 15:34:23.936214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:28.854 15:34:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:11:29.112 15:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:29.371 15:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:29.371 15:34:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:11:29.371 15:34:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:11:29.371 15:34:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:29.371 15:34:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:11:31.272 15:34:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:31.272 15:34:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:31.272 15:34:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:31.272 15:34:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:31.272 15:34:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:31.272 15:34:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:11:31.272 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:31.272 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:31.272 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:31.272 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75393 00:11:31.273 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:31.532 15:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:31.532 [global] 00:11:31.532 thread=1 00:11:31.532 invalidate=1 00:11:31.532 rw=randrw 00:11:31.532 time_based=1 00:11:31.532 runtime=6 00:11:31.532 ioengine=libaio 00:11:31.532 direct=1 00:11:31.532 bs=4096 00:11:31.532 iodepth=128 00:11:31.532 norandommap=0 00:11:31.532 numjobs=1 00:11:31.532 00:11:31.532 verify_dump=1 00:11:31.532 verify_backlog=512 00:11:31.532 verify_state_save=0 00:11:31.532 do_verify=1 00:11:31.532 verify=crc32c-intel 00:11:31.532 [job0] 00:11:31.532 filename=/dev/nvme0n1 00:11:31.532 Could not set queue depth (nvme0n1) 00:11:31.532 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:31.532 fio-3.35 00:11:31.532 Starting 1 thread 00:11:32.467 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:32.726 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:32.984 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:32.984 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:32.984 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:32.984 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:32.984 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:32.984 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:32.984 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:32.985 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:32.985 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:32.985 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:32.985 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:32.985 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:32.985 15:34:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:33.919 15:34:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:33.919 15:34:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:33.919 15:34:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:33.919 15:34:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:34.177 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:34.460 15:34:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:35.399 15:34:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:35.399 15:34:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:35.399 15:34:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:35.399 15:34:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75393 00:11:37.931 00:11:37.931 job0: (groupid=0, jobs=1): err= 0: pid=75414: Mon Jul 15 15:34:32 2024 00:11:37.931 read: IOPS=10.7k, BW=41.7MiB/s (43.8MB/s)(251MiB/6006msec) 00:11:37.931 slat (usec): min=4, max=8885, avg=53.62, stdev=244.00 00:11:37.931 clat (usec): min=2727, max=17015, avg=8177.04, stdev=1219.00 00:11:37.931 lat (usec): min=2749, max=17047, avg=8230.66, stdev=1229.65 00:11:37.931 clat percentiles (usec): 00:11:37.931 | 1.00th=[ 5080], 5.00th=[ 6325], 10.00th=[ 7046], 20.00th=[ 7439], 00:11:37.931 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8291], 00:11:37.931 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[10159], 00:11:37.931 | 99.00th=[11994], 99.50th=[12518], 99.90th=[14877], 99.95th=[15139], 00:11:37.931 | 99.99th=[16319] 00:11:37.931 bw ( KiB/s): min= 8120, max=28152, per=52.08%, avg=22266.45, stdev=6190.25, samples=11 00:11:37.931 iops : min= 2030, max= 7038, avg=5566.55, stdev=1547.55, samples=11 00:11:37.931 write: IOPS=6294, BW=24.6MiB/s (25.8MB/s)(132MiB/5360msec); 0 zone resets 00:11:37.931 slat (usec): min=14, max=3811, avg=66.27, stdev=173.29 00:11:37.931 clat (usec): min=2488, max=13558, avg=7024.41, stdev=990.05 00:11:37.931 lat (usec): min=2513, max=13581, avg=7090.68, stdev=994.38 00:11:37.931 clat percentiles (usec): 00:11:37.931 | 1.00th=[ 4015], 5.00th=[ 5276], 10.00th=[ 5997], 20.00th=[ 6456], 00:11:37.931 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7242], 00:11:37.931 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7963], 95.00th=[ 8291], 00:11:37.931 | 99.00th=[10159], 99.50th=[10814], 99.90th=[12387], 99.95th=[12649], 00:11:37.931 | 99.99th=[13173] 00:11:37.931 bw ( KiB/s): min= 8616, max=28168, per=88.53%, avg=22291.36, stdev=5924.51, samples=11 00:11:37.931 iops : min= 2154, max= 7042, avg=5572.82, stdev=1481.12, samples=11 00:11:37.931 lat (msec) : 4=0.42%, 10=95.34%, 20=4.24% 00:11:37.931 cpu : usr=5.33%, sys=21.77%, ctx=6212, majf=0, minf=96 00:11:37.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:37.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:37.931 issued rwts: total=64192,33741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:37.931 00:11:37.931 Run status group 0 (all jobs): 00:11:37.931 READ: bw=41.7MiB/s (43.8MB/s), 41.7MiB/s-41.7MiB/s (43.8MB/s-43.8MB/s), io=251MiB (263MB), run=6006-6006msec 00:11:37.931 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=132MiB (138MB), run=5360-5360msec 00:11:37.931 00:11:37.931 Disk stats (read/write): 00:11:37.931 nvme0n1: ios=63400/33015, merge=0/0, ticks=486462/217230, in_queue=703692, util=98.65% 
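The run above boils down to a short RPC/nvme-cli sequence: create the TCP transport, back subsystem cnode1 with a Malloc0 bdev, expose it on two portals, connect to both portals from the host so the kernel folds them into one multipathed namespace (paths nvme0c0n1 and nvme0c1n1), then flip per-listener ANA states while fio runs. A condensed sketch of that sequence, with every value copied from the commands traced above; rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and $HOSTNQN/$HOSTID stand in for the per-run UUID the harness generated:

# target side: transport, bdev, subsystem, namespace, two listeners
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# host side: two connects to the same subsystem NQN over different portals fan in to one nvme-subsys
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
# change ANA state per listener on the target, then read back what the host reports,
# which is what check_ana_state polls in the trace above
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state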
00:11:37.931 15:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:11:37.931 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:38.190 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:38.190 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:38.190 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:38.190 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:38.449 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:38.449 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:38.449 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:38.449 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:38.449 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:38.449 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:38.449 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:38.449 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:11:38.449 15:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:39.385 15:34:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:39.385 15:34:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:39.385 15:34:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:39.385 15:34:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:39.385 15:34:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75538 00:11:39.385 15:34:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:39.385 15:34:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:39.385 [global] 00:11:39.385 thread=1 00:11:39.385 invalidate=1 00:11:39.385 rw=randrw 00:11:39.385 time_based=1 00:11:39.385 runtime=6 00:11:39.385 ioengine=libaio 00:11:39.385 direct=1 00:11:39.385 bs=4096 00:11:39.385 iodepth=128 00:11:39.385 norandommap=0 00:11:39.385 numjobs=1 00:11:39.385 00:11:39.385 verify_dump=1 00:11:39.385 verify_backlog=512 00:11:39.385 verify_state_save=0 00:11:39.385 do_verify=1 00:11:39.385 verify=crc32c-intel 00:11:39.385 [job0] 00:11:39.385 filename=/dev/nvme0n1 00:11:39.385 Could not set queue depth (nvme0n1) 00:11:39.385 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.385 fio-3.35 00:11:39.385 Starting 1 thread 00:11:40.319 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:40.577 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:40.836 15:34:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:41.880 15:34:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:41.880 15:34:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:41.880 15:34:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:41.880 15:34:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:42.138 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:42.396 15:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:43.330 15:34:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:43.330 15:34:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:43.330 15:34:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:43.330 15:34:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75538 00:11:45.861 00:11:45.861 job0: (groupid=0, jobs=1): err= 0: pid=75565: Mon Jul 15 15:34:40 2024 00:11:45.861 read: IOPS=12.1k, BW=47.2MiB/s (49.5MB/s)(283MiB/6003msec) 00:11:45.861 slat (usec): min=5, max=7946, avg=42.27, stdev=208.10 00:11:45.861 clat (usec): min=595, max=14698, avg=7341.86, stdev=1579.69 00:11:45.861 lat (usec): min=648, max=14708, avg=7384.13, stdev=1597.53 00:11:45.861 clat percentiles (usec): 00:11:45.861 | 1.00th=[ 3621], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 5932], 00:11:45.861 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7701], 00:11:45.861 | 70.00th=[ 8029], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[ 9634], 00:11:45.861 | 99.00th=[11600], 99.50th=[11994], 99.90th=[13042], 99.95th=[14091], 00:11:45.861 | 99.99th=[14222] 00:11:45.861 bw ( KiB/s): min=11168, max=43496, per=53.64%, avg=25930.18, stdev=9407.08, samples=11 00:11:45.861 iops : min= 2792, max=10874, avg=6482.55, stdev=2351.77, samples=11 00:11:45.861 write: IOPS=7258, BW=28.4MiB/s (29.7MB/s)(149MiB/5254msec); 0 zone resets 00:11:45.861 slat (usec): min=14, max=5555, avg=52.47, stdev=141.49 00:11:45.861 clat (usec): min=379, max=13190, avg=6015.19, stdev=1595.44 00:11:45.861 lat (usec): min=453, max=13215, avg=6067.66, stdev=1609.83 00:11:45.861 clat percentiles (usec): 00:11:45.861 | 1.00th=[ 2573], 5.00th=[ 3294], 10.00th=[ 3752], 20.00th=[ 4359], 00:11:45.861 | 30.00th=[ 5014], 40.00th=[ 5997], 50.00th=[ 6456], 60.00th=[ 6783], 00:11:45.861 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7701], 95.00th=[ 7963], 00:11:45.861 | 99.00th=[ 9765], 99.50th=[10552], 99.90th=[12125], 99.95th=[12518], 00:11:45.861 | 99.99th=[13042] 00:11:45.861 bw ( KiB/s): min=11496, max=42792, per=89.25%, avg=25913.45, stdev=9198.25, samples=11 00:11:45.861 iops : min= 2874, max=10698, avg=6478.36, stdev=2299.56, samples=11 00:11:45.861 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:11:45.861 lat (msec) : 2=0.17%, 4=5.80%, 10=91.32%, 20=2.67% 00:11:45.861 cpu : usr=6.16%, sys=24.42%, ctx=7141, majf=0, minf=108 00:11:45.861 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:45.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.861 issued rwts: total=72547,38135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.861 00:11:45.861 Run status group 0 (all jobs): 00:11:45.861 READ: bw=47.2MiB/s (49.5MB/s), 47.2MiB/s-47.2MiB/s (49.5MB/s-49.5MB/s), io=283MiB (297MB), run=6003-6003msec 00:11:45.861 WRITE: bw=28.4MiB/s (29.7MB/s), 28.4MiB/s-28.4MiB/s (29.7MB/s-29.7MB/s), io=149MiB (156MB), run=5254-5254msec 00:11:45.861 00:11:45.861 Disk stats (read/write): 00:11:45.861 nvme0n1: ios=71030/38135, merge=0/0, ticks=485881/211964, in_queue=697845, util=98.63% 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:45.861 15:34:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:46.119 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:46.119 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:46.119 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:46.119 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:46.119 rmmod nvme_tcp 00:11:46.119 rmmod nvme_fabrics 00:11:46.119 rmmod nvme_keyring 00:11:46.119 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.119 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:46.119 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:46.119 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75250 ']' 00:11:46.119 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75250 00:11:46.119 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75250 ']' 00:11:46.119 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75250 00:11:46.119 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:11:46.120 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.120 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75250 00:11:46.120 killing process with pid 75250 00:11:46.120 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:46.120 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:46.120 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75250' 00:11:46.120 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75250 00:11:46.120 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 75250 00:11:46.378 
15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:46.378 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:46.378 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:46.378 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.378 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:46.378 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.378 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.378 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.378 15:34:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:46.378 ************************************ 00:11:46.378 END TEST nvmf_target_multipath 00:11:46.378 ************************************ 00:11:46.378 00:11:46.378 real 0m20.497s 00:11:46.378 user 1m19.977s 00:11:46.378 sys 0m7.012s 00:11:46.378 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.378 15:34:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:46.378 15:34:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:46.378 15:34:41 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:46.378 15:34:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:46.378 15:34:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.378 15:34:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:46.378 ************************************ 00:11:46.378 START TEST nvmf_zcopy 00:11:46.378 ************************************ 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:46.378 * Looking for test storage... 
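The multipath teardown traced just before the nvmf_zcopy banner reduces to a few commands, again using this run's values (75250 was the nvmf_tgt pid). A minimal sketch:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # drops both paths at once, hence 'disconnected 2 controller(s)'
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp                          # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above are its -v output
modprobe -v -r nvme-fabrics
kill 75250                                       # stop the target app, as killprocess 75250 does in the trace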
00:11:46.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:46.378 15:34:41 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:46.379 Cannot find device "nvmf_tgt_br" 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:11:46.379 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:46.636 Cannot find device "nvmf_tgt_br2" 00:11:46.636 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:11:46.636 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:46.636 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:46.636 Cannot find device "nvmf_tgt_br" 00:11:46.636 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:11:46.636 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:46.636 Cannot find device "nvmf_tgt_br2" 00:11:46.636 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:11:46.636 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:46.636 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:46.636 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:46.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:46.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:46.637 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:46.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:11:46.896 00:11:46.896 --- 10.0.0.2 ping statistics --- 00:11:46.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.896 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:46.896 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:46.896 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:11:46.896 00:11:46.896 --- 10.0.0.3 ping statistics --- 00:11:46.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.896 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:46.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:46.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:11:46.896 00:11:46.896 --- 10.0.0.1 ping statistics --- 00:11:46.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.896 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=75846 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 75846 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 75846 ']' 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.896 15:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.896 [2024-07-15 15:34:41.875387] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:46.896 [2024-07-15 15:34:41.875698] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.896 [2024-07-15 15:34:42.014513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.154 [2024-07-15 15:34:42.084742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.154 [2024-07-15 15:34:42.085027] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:47.154 [2024-07-15 15:34:42.085128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.154 [2024-07-15 15:34:42.085242] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.154 [2024-07-15 15:34:42.085319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.154 [2024-07-15 15:34:42.085424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.091 [2024-07-15 15:34:42.932522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.091 [2024-07-15 15:34:42.948660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.091 malloc0 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.091 
15:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:48.091 { 00:11:48.091 "params": { 00:11:48.091 "name": "Nvme$subsystem", 00:11:48.091 "trtype": "$TEST_TRANSPORT", 00:11:48.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.091 "adrfam": "ipv4", 00:11:48.091 "trsvcid": "$NVMF_PORT", 00:11:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.091 "hdgst": ${hdgst:-false}, 00:11:48.091 "ddgst": ${ddgst:-false} 00:11:48.091 }, 00:11:48.091 "method": "bdev_nvme_attach_controller" 00:11:48.091 } 00:11:48.091 EOF 00:11:48.091 )") 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:48.091 15:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:48.091 "params": { 00:11:48.091 "name": "Nvme1", 00:11:48.091 "trtype": "tcp", 00:11:48.091 "traddr": "10.0.0.2", 00:11:48.091 "adrfam": "ipv4", 00:11:48.091 "trsvcid": "4420", 00:11:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.091 "hdgst": false, 00:11:48.091 "ddgst": false 00:11:48.091 }, 00:11:48.091 "method": "bdev_nvme_attach_controller" 00:11:48.091 }' 00:11:48.091 [2024-07-15 15:34:43.040176] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:48.091 [2024-07-15 15:34:43.040273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75897 ] 00:11:48.091 [2024-07-15 15:34:43.182111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.350 [2024-07-15 15:34:43.255451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.350 Running I/O for 10 seconds... 
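The zcopy case drives I/O with bdevperf instead of the kernel initiator: the target is brought up with a zero-copy-enabled TCP transport, and bdevperf attaches an NVMe-oF controller from a small JSON config that the harness feeds over /dev/fd/62. A condensed sketch of the same bring-up and invocation, with the RPC arguments and the bdev_nvme_attach_controller entry copied from the trace above; writing the config to a file (/tmp/bdevperf.json, an arbitrary name used here for readability) replaces the fd trick, and the outer "subsystems"/"config" wrapper is assumed to follow SPDK's usual JSON-config layout rather than reproducing gen_nvmf_target_json exactly:

rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# bdevperf config: one attach entry, values as printed by the harness above
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192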
00:11:58.321 00:11:58.321 Latency(us) 00:11:58.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.321 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:58.321 Verification LBA range: start 0x0 length 0x1000 00:11:58.321 Nvme1n1 : 10.01 6374.97 49.80 0.00 0.00 20012.35 577.16 31457.28 00:11:58.321 =================================================================================================================== 00:11:58.321 Total : 6374.97 49.80 0.00 0.00 20012.35 577.16 31457.28 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76008 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:58.580 { 00:11:58.580 "params": { 00:11:58.580 "name": "Nvme$subsystem", 00:11:58.580 "trtype": "$TEST_TRANSPORT", 00:11:58.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:58.580 "adrfam": "ipv4", 00:11:58.580 "trsvcid": "$NVMF_PORT", 00:11:58.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:58.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:58.580 "hdgst": ${hdgst:-false}, 00:11:58.580 "ddgst": ${ddgst:-false} 00:11:58.580 }, 00:11:58.580 "method": "bdev_nvme_attach_controller" 00:11:58.580 } 00:11:58.580 EOF 00:11:58.580 )") 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:58.580 [2024-07-15 15:34:53.583372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.580 [2024-07-15 15:34:53.583414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:11:58.580 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:58.580 15:34:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:58.580 "params": { 00:11:58.580 "name": "Nvme1", 00:11:58.580 "trtype": "tcp", 00:11:58.580 "traddr": "10.0.0.2", 00:11:58.580 "adrfam": "ipv4", 00:11:58.580 "trsvcid": "4420", 00:11:58.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:58.580 "hdgst": false, 00:11:58.580 "ddgst": false 00:11:58.580 }, 00:11:58.580 "method": "bdev_nvme_attach_controller" 00:11:58.580 }' 00:11:58.580 [2024-07-15 15:34:53.595354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.580 [2024-07-15 15:34:53.595378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.580 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.580 [2024-07-15 15:34:53.603368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.580 [2024-07-15 15:34:53.603408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.580 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.580 [2024-07-15 15:34:53.615360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.580 [2024-07-15 15:34:53.615384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.580 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.580 [2024-07-15 15:34:53.627377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.580 [2024-07-15 15:34:53.627401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.580 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.580 [2024-07-15 15:34:53.634347] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:11:58.580 [2024-07-15 15:34:53.634429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76008 ] 00:11:58.580 [2024-07-15 15:34:53.639383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.580 [2024-07-15 15:34:53.639411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.580 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.580 [2024-07-15 15:34:53.647357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.580 [2024-07-15 15:34:53.647383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.580 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.580 [2024-07-15 15:34:53.659382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.580 [2024-07-15 15:34:53.659408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.580 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.580 [2024-07-15 15:34:53.671378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.580 [2024-07-15 15:34:53.671396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.580 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.580 [2024-07-15 15:34:53.683417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.580 [2024-07-15 15:34:53.683459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.580 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.580 [2024-07-15 15:34:53.695386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.580 [2024-07-15 15:34:53.695409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.580 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:58.580 [2024-07-15 15:34:53.707384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.580 [2024-07-15 15:34:53.707406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.840 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.840 [2024-07-15 15:34:53.719385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.840 [2024-07-15 15:34:53.719406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.840 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.840 [2024-07-15 15:34:53.731386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.840 [2024-07-15 15:34:53.731407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.840 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.840 [2024-07-15 15:34:53.743391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.840 [2024-07-15 15:34:53.743412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.840 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.755421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.755445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.767422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.767446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 [2024-07-15 15:34:53.771098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.779482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.779565] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.791451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.791482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.803438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.803463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.815450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.815480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.827448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.827472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 [2024-07-15 15:34:53.830682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.839438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.839463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.851475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.851553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.863470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.863506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.875474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.875506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.887465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.887504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.899466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.899502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.911459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.911484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.923456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.923480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.935460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.935483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.947460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.947484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 [2024-07-15 15:34:53.959470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-07-15 15:34:53.959495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.841 Running I/O for 5 seconds... 00:11:59.101 [2024-07-15 15:34:53.971494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:53.971517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:53.988627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:53.988685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.005636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.005681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.020709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.020741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.037037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.037082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.054018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.054077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.068718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.068763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.085753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.085786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.102237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.102283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.117270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.117316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.133379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.133425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.152307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.152374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.166504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.166580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.183552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.183620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.198552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.198596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.101 [2024-07-15 15:34:54.214665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-07-15 15:34:54.214709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.231689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.231734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.247328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.247389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.264528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.264579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.280605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.280640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.298143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.298189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.313584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.313625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.329084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.329129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.339371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:59.361 [2024-07-15 15:34:54.339414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.353783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.353829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.364170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.364218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.378219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.378285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.393036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.393110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.403306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.403354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.418012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.418058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.430412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.430459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.446476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.446521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.465079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.465126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.361 [2024-07-15 15:34:54.479673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.361 [2024-07-15 15:34:54.479720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.361 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.497052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.497101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.513469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.513516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.530248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.530278] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.547087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.547132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.562603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.562678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.573325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.573369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.588045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.588090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.599101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.599158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.613917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.613963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.629411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.629455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.645907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.645954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.662808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.662841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.678883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.678915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.695532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.695605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.711415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.711459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.722339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.722382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.621 [2024-07-15 15:34:54.736591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.621 [2024-07-15 15:34:54.736618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.621 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.881 [2024-07-15 15:34:54.752062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.881 [2024-07-15 15:34:54.752106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.881 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.881 [2024-07-15 15:34:54.768623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.881 [2024-07-15 15:34:54.768685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.881 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.881 [2024-07-15 15:34:54.784768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.784827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:54.800688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.800735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:54.819030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.819061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:59.882 [2024-07-15 15:34:54.834462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.834507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:54.846451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.846495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:54.863705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.863748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:54.876967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.877012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:54.893842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.893889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:54.909209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.909254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:54.927119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.927168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:54.943342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.943388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:54.960417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.960485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:54.977207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.977268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:54.993515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:54.993595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.882 [2024-07-15 15:34:55.004723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.882 [2024-07-15 15:34:55.004755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.882 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.142 [2024-07-15 15:34:55.020122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.142 [2024-07-15 15:34:55.020167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.142 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.142 [2024-07-15 15:34:55.034868] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.142 [2024-07-15 15:34:55.034902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.142 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.142 [2024-07-15 15:34:55.051316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.142 [2024-07-15 15:34:55.051359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.142 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.142 [2024-07-15 15:34:55.068672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.142 [2024-07-15 15:34:55.068716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.142 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.142 [2024-07-15 15:34:55.083642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.142 [2024-07-15 15:34:55.083673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.142 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.142 [2024-07-15 15:34:55.099328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.142 [2024-07-15 15:34:55.099372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.142 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.142 [2024-07-15 15:34:55.116605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.142 [2024-07-15 15:34:55.116649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.142 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.142 [2024-07-15 15:34:55.133430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.142 [2024-07-15 15:34:55.133476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.142 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.142 [2024-07-15 15:34:55.149107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.142 [2024-07-15 15:34:55.149152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.142 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.142 [2024-07-15 15:34:55.165494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.142 [2024-07-15 15:34:55.165565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.142 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.143 [2024-07-15 15:34:55.183511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.143 [2024-07-15 15:34:55.183564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.143 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.143 [2024-07-15 15:34:55.197713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.143 [2024-07-15 15:34:55.197758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.143 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.143 [2024-07-15 15:34:55.213311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.143 [2024-07-15 15:34:55.213357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.143 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.143 [2024-07-15 15:34:55.230453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.143 [2024-07-15 15:34:55.230497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.143 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.143 [2024-07-15 15:34:55.247190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:00.143 [2024-07-15 15:34:55.247224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.143 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.143 [2024-07-15 15:34:55.263353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.143 [2024-07-15 15:34:55.263387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.143 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.273914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.273963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.288349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.288382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.306499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.306562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.321627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.321691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.337337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.337374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.352809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.352843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.368371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.368405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.384529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.384588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.400112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.400145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.410610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.410658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.425762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.425797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.437031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:00.403 [2024-07-15 15:34:55.437067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.452832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.452871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.469003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.469067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.486613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.486677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.501979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.502047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.403 [2024-07-15 15:34:55.519176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.403 [2024-07-15 15:34:55.519212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.403 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.663 [2024-07-15 15:34:55.534009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.663 [2024-07-15 15:34:55.534043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.663 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.663 [2024-07-15 15:34:55.551670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.663 [2024-07-15 15:34:55.551702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.663 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.663 [2024-07-15 15:34:55.566219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.663 [2024-07-15 15:34:55.566253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.663 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.663 [2024-07-15 15:34:55.581833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.663 [2024-07-15 15:34:55.581866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.663 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.663 [2024-07-15 15:34:55.598589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.663 [2024-07-15 15:34:55.598682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.663 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.663 [2024-07-15 15:34:55.615325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.663 [2024-07-15 15:34:55.615378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.664 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.664 [2024-07-15 15:34:55.631500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.664 [2024-07-15 15:34:55.631560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.664 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.664 [2024-07-15 15:34:55.647774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.664 [2024-07-15 15:34:55.647807] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.664 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.664 [2024-07-15 15:34:55.666034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.664 [2024-07-15 15:34:55.666068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.664 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.664 [2024-07-15 15:34:55.680846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.664 [2024-07-15 15:34:55.680880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.664 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.664 [2024-07-15 15:34:55.690783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.664 [2024-07-15 15:34:55.690819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.664 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.664 [2024-07-15 15:34:55.705447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.664 [2024-07-15 15:34:55.705480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.664 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.664 [2024-07-15 15:34:55.720973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.664 [2024-07-15 15:34:55.721005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.664 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.664 [2024-07-15 15:34:55.731167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.664 [2024-07-15 15:34:55.731199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.664 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.664 [2024-07-15 15:34:55.745943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.664 [2024-07-15 15:34:55.745976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.664 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.664 [2024-07-15 15:34:55.763676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.664 [2024-07-15 15:34:55.763709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.664 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.664 [2024-07-15 15:34:55.778034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.664 [2024-07-15 15:34:55.778067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.664 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.795744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.795777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.812098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.812133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.827703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.827737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.844479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.844512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.859840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.859880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.874317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.874350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.890012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.890044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.906798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.906833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.923708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.923745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.940101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.940134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:00.923 [2024-07-15 15:34:55.955448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.955481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.971048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.971112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.982983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.983019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:55.999601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:55.999639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:56.014796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:56.014838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:56.030722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:56.030782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.923 [2024-07-15 15:34:56.047888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.923 [2024-07-15 15:34:56.047926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.923 2024/07/15 15:34:56 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.182 [2024-07-15 15:34:56.063471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.182 [2024-07-15 15:34:56.063507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.182 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.182 [2024-07-15 15:34:56.080225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.182 [2024-07-15 15:34:56.080262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.182 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.182 [2024-07-15 15:34:56.095308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.182 [2024-07-15 15:34:56.095343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.182 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.182 [2024-07-15 15:34:56.111683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.182 [2024-07-15 15:34:56.111731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.182 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.182 [2024-07-15 15:34:56.128419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.182 [2024-07-15 15:34:56.128452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.182 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.182 [2024-07-15 15:34:56.144811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.182 [2024-07-15 15:34:56.144845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.182 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.182 [2024-07-15 15:34:56.161563] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.182 [2024-07-15 15:34:56.161611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.182 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.182 [2024-07-15 15:34:56.177051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.182 [2024-07-15 15:34:56.177086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.182 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.182 [2024-07-15 15:34:56.193173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.182 [2024-07-15 15:34:56.193207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.183 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.183 [2024-07-15 15:34:56.209989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.183 [2024-07-15 15:34:56.210053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.183 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.183 [2024-07-15 15:34:56.225192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.183 [2024-07-15 15:34:56.225226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.183 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.183 [2024-07-15 15:34:56.241008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.183 [2024-07-15 15:34:56.241058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.183 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.183 [2024-07-15 15:34:56.258315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.183 [2024-07-15 15:34:56.258348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.183 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.183 [2024-07-15 15:34:56.273935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.183 [2024-07-15 15:34:56.273971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.183 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.183 [2024-07-15 15:34:56.288974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.183 [2024-07-15 15:34:56.289024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.183 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.183 [2024-07-15 15:34:56.304155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.183 [2024-07-15 15:34:56.304189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.183 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.442 [2024-07-15 15:34:56.314834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.442 [2024-07-15 15:34:56.314871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.442 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.442 [2024-07-15 15:34:56.329713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.442 [2024-07-15 15:34:56.329747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.442 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.442 [2024-07-15 15:34:56.346663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.442 [2024-07-15 15:34:56.346698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.442 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.442 [2024-07-15 15:34:56.363386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:01.442 [2024-07-15 15:34:56.363420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.442 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.442 [2024-07-15 15:34:56.381598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.442 [2024-07-15 15:34:56.381632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.442 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.442 [2024-07-15 15:34:56.397001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.442 [2024-07-15 15:34:56.397064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.442 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.442 [2024-07-15 15:34:56.413800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.442 [2024-07-15 15:34:56.413835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.442 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.442 [2024-07-15 15:34:56.431167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.442 [2024-07-15 15:34:56.431201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.442 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.442 [2024-07-15 15:34:56.446522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.443 [2024-07-15 15:34:56.446580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.443 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.443 [2024-07-15 15:34:56.463993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.443 [2024-07-15 15:34:56.464043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.443 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.443 [2024-07-15 15:34:56.480321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.443 [2024-07-15 15:34:56.480357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.443 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.443 [2024-07-15 15:34:56.496807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.443 [2024-07-15 15:34:56.496845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.443 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.443 [2024-07-15 15:34:56.512776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.443 [2024-07-15 15:34:56.512813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.443 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.443 [2024-07-15 15:34:56.529069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.443 [2024-07-15 15:34:56.529102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.443 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.443 [2024-07-15 15:34:56.546244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.443 [2024-07-15 15:34:56.546278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.443 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.443 [2024-07-15 15:34:56.561975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.443 [2024-07-15 15:34:56.562027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.443 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.702 [2024-07-15 15:34:56.573237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:01.702 [2024-07-15 15:34:56.573287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.702 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.702 [2024-07-15 15:34:56.588403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.702 [2024-07-15 15:34:56.588441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.702 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.702 [2024-07-15 15:34:56.605652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.702 [2024-07-15 15:34:56.605704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.702 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.702 [2024-07-15 15:34:56.622005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.702 [2024-07-15 15:34:56.622083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.702 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.702 [2024-07-15 15:34:56.638488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.702 [2024-07-15 15:34:56.638549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.702 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.702 [2024-07-15 15:34:56.655169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.702 [2024-07-15 15:34:56.655202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.702 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.702 [2024-07-15 15:34:56.671728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.702 [2024-07-15 15:34:56.671764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.702 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.702 [2024-07-15 15:34:56.686967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.702 [2024-07-15 15:34:56.687005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.702 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.702 [2024-07-15 15:34:56.698915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.702 [2024-07-15 15:34:56.698966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.702 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.702 [2024-07-15 15:34:56.714937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.702 [2024-07-15 15:34:56.714974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.703 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.703 [2024-07-15 15:34:56.729996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.703 [2024-07-15 15:34:56.730043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.703 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.703 [2024-07-15 15:34:56.745346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.703 [2024-07-15 15:34:56.745379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.703 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.703 [2024-07-15 15:34:56.756074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.703 [2024-07-15 15:34:56.756109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.703 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.703 [2024-07-15 15:34:56.771585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.703 [2024-07-15 15:34:56.771649] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.703 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.703 [2024-07-15 15:34:56.787781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.703 [2024-07-15 15:34:56.787816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.703 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.703 [2024-07-15 15:34:56.804129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.703 [2024-07-15 15:34:56.804163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.703 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.703 [2024-07-15 15:34:56.820818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.703 [2024-07-15 15:34:56.820854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.703 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.962 [2024-07-15 15:34:56.837338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.962 [2024-07-15 15:34:56.837372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.962 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.962 [2024-07-15 15:34:56.854523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.962 [2024-07-15 15:34:56.854582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.962 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:56.870284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:56.870317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:56.880752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:56.880786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:56.895658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:56.895691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:56.911674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:56.911737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:56.927689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:56.927721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:56.942979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:56.943030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:56.953456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:56.953489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:56.969213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:56.969246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:01.963 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:56.986007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:56.986070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:57.004308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:57.004343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:57.019167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:57.019199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:57.029613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:57.029646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:57.044859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:57.044894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:57.055445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:57.055476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:01.963 [2024-07-15 15:34:57.069851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:57.069884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.963 [2024-07-15 15:34:57.084879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.963 [2024-07-15 15:34:57.084926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.963 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.096513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.096569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.110912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.110948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.128398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.128433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.144139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.144184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.161515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.161573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.177890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.177950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.193495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.193561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.209336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.209383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.219923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.219969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.234722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.234777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.251153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.251212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.266851] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.266894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.276961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.277006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.290515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.290568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.224 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.224 [2024-07-15 15:34:57.306274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.224 [2024-07-15 15:34:57.306319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.225 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.225 [2024-07-15 15:34:57.316836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.225 [2024-07-15 15:34:57.316870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.225 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.225 [2024-07-15 15:34:57.330644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.225 [2024-07-15 15:34:57.330687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.225 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.225 [2024-07-15 15:34:57.348231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.225 [2024-07-15 15:34:57.348291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.225 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.504 [2024-07-15 15:34:57.362828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.504 [2024-07-15 15:34:57.362863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.504 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.504 [2024-07-15 15:34:57.373248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.504 [2024-07-15 15:34:57.373293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.504 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.504 [2024-07-15 15:34:57.388037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.504 [2024-07-15 15:34:57.388082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.504 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.504 [2024-07-15 15:34:57.404892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.504 [2024-07-15 15:34:57.404937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.504 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.504 [2024-07-15 15:34:57.422187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.504 [2024-07-15 15:34:57.422235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.504 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.504 [2024-07-15 15:34:57.437527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.504 [2024-07-15 15:34:57.437589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.504 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.504 [2024-07-15 15:34:57.447549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:02.504 [2024-07-15 15:34:57.447606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.504 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.504 [2024-07-15 15:34:57.463462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.504 [2024-07-15 15:34:57.463506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.504 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.504 [2024-07-15 15:34:57.480205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.504 [2024-07-15 15:34:57.480252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.504 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.504 [2024-07-15 15:34:57.496893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.504 [2024-07-15 15:34:57.496941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.505 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.505 [2024-07-15 15:34:57.512091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.505 [2024-07-15 15:34:57.512135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.505 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.505 [2024-07-15 15:34:57.522242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.505 [2024-07-15 15:34:57.522289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.505 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.505 [2024-07-15 15:34:57.537488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.505 [2024-07-15 15:34:57.537533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.505 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.505 [2024-07-15 15:34:57.548777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.505 [2024-07-15 15:34:57.548806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.505 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.505 [2024-07-15 15:34:57.565790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.505 [2024-07-15 15:34:57.565819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.505 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.505 [2024-07-15 15:34:57.579955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.505 [2024-07-15 15:34:57.580000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.505 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.505 [2024-07-15 15:34:57.596620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.505 [2024-07-15 15:34:57.596665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.505 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.505 [2024-07-15 15:34:57.611167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.505 [2024-07-15 15:34:57.611211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.505 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.505 [2024-07-15 15:34:57.627457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.505 [2024-07-15 15:34:57.627504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.505 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.642629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:02.765 [2024-07-15 15:34:57.642707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.652807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.652852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.666985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.667030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.682018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.682063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.697259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.697304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.707034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.707093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.721844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.721888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.736750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.736782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.752032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.752075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.763592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.763636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.780615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.780670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.794684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.794737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.811765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.811808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.827304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.827348] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.843800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.843832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.860478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.860523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.875717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.875747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.765 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.765 [2024-07-15 15:34:57.891651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.765 [2024-07-15 15:34:57.891690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:57.907450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:57.907494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:57.922964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:57.922994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:57.939984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:57.940030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:57.956161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:57.956206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:57.974296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:57.974340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:57.988568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:57.988612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:58.003988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:58.004032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:58.020831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:58.020875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:58.036914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:58.036958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:03.026 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:58.054564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:58.054607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:58.070308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:58.070353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:58.087880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:58.087924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:58.102394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:58.102439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:58.118078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:58.118122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.026 [2024-07-15 15:34:58.136245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:58.136289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:03.026 [2024-07-15 15:34:58.151157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.026 [2024-07-15 15:34:58.151203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.026 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.167152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.167196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.183851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.183897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.200449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.200495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.216294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.216340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.235445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.235491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.250789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.250826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.266914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.266948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.276749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.276804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.291358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.291403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.308967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.309013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.323821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.323852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.341773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.341803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.355698] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.355743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.372671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.372715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.388277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.388321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.286 [2024-07-15 15:34:58.405902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.286 [2024-07-15 15:34:58.405962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.286 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.545 [2024-07-15 15:34:58.421151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.545 [2024-07-15 15:34:58.421196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.545 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.545 [2024-07-15 15:34:58.432290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.545 [2024-07-15 15:34:58.432334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.545 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.546 [2024-07-15 15:34:58.449778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.546 [2024-07-15 15:34:58.449810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.546 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.546
[The same three-message failure sequence repeats from here with only the timestamps changing (roughly every 15 ms, 15:34:58.464 through 15:34:58.951): subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, then nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace, then the JSON-RPC error for nvmf_subsystem_add_ns with Code=-32602 Msg=Invalid parameters.]
00:12:04.065 [2024-07-15 15:34:58.967794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.065 [2024-07-15 15:34:58.967838]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.065 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.065 00:12:04.065 Latency(us) 00:12:04.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.065 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:04.065 Nvme1n1 : 5.01 11893.59 92.92 0.00 0.00 10749.49 4379.00 21090.68 00:12:04.065 =================================================================================================================== 00:12:04.065 Total : 11893.59 92.92 0.00 0.00 10749.49 4379.00 21090.68 00:12:04.065 [2024-07-15 15:34:58.979470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.065 [2024-07-15 15:34:58.979511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:58.991467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:58.991509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:59.003513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:59.003576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:59.015528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:59.015587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:59.027561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:59.027624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:59.039515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:59.039576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:59.051520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:59.051578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:59.063505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:59.063559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:59.075489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:59.075528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:59.087517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:59.087576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:59.099502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:59.099552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:59.111517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:59.111572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 
15:34:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:59.123517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:59.123574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 [2024-07-15 15:34:59.135499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.066 [2024-07-15 15:34:59.135547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.066 2024/07/15 15:34:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.066 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76008) - No such process 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76008 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.066 delay0 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.066 15:34:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:04.326 [2024-07-15 15:34:59.339292] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:10.916 Initializing NVMe Controllers 00:12:10.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:10.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:10.916 Initialization complete. Launching workers. 
00:12:10.916 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 53 00:12:10.916 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 340, failed to submit 33 00:12:10.916 success 149, unsuccess 191, failed 0 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.916 rmmod nvme_tcp 00:12:10.916 rmmod nvme_fabrics 00:12:10.916 rmmod nvme_keyring 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 75846 ']' 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 75846 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 75846 ']' 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 75846 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75846 00:12:10.916 killing process with pid 75846 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75846' 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 75846 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 75846 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:10.916 00:12:10.916 real 0m24.341s 00:12:10.916 user 0m39.302s 00:12:10.916 sys 0m6.638s 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:12:10.916 ************************************ 00:12:10.916 15:35:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:10.916 END TEST nvmf_zcopy 00:12:10.916 ************************************ 00:12:10.916 15:35:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:10.916 15:35:05 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:10.916 15:35:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:10.916 15:35:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.916 15:35:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:10.916 ************************************ 00:12:10.916 START TEST nvmf_nmic 00:12:10.916 ************************************ 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:10.916 * Looking for test storage... 00:12:10.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.916 15:35:05 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:10.917 Cannot find device "nvmf_tgt_br" 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:10.917 Cannot find device "nvmf_tgt_br2" 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:10.917 Cannot find device "nvmf_tgt_br" 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:10.917 Cannot find device "nvmf_tgt_br2" 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:12:10.917 15:35:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:10.917 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:10.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.917 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:10.917 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:10.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.917 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:10.917 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:10.917 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:10.917 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:10.917 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:10.917 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:11.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:11.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:12:11.177 00:12:11.177 --- 10.0.0.2 ping statistics --- 00:12:11.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.177 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:11.177 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:11.177 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:12:11.177 00:12:11.177 --- 10.0.0.3 ping statistics --- 00:12:11.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.177 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:11.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:11.177 00:12:11.177 --- 10.0.0.1 ping statistics --- 00:12:11.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.177 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76339 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76339 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76339 ']' 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.177 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.177 [2024-07-15 15:35:06.284015] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:12:11.177 [2024-07-15 15:35:06.284110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.436 [2024-07-15 15:35:06.422719] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.436 [2024-07-15 15:35:06.481639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.436 [2024-07-15 15:35:06.481883] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.436 [2024-07-15 15:35:06.481954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.436 [2024-07-15 15:35:06.482088] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.436 [2024-07-15 15:35:06.482153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.436 [2024-07-15 15:35:06.482330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.436 [2024-07-15 15:35:06.482414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.436 [2024-07-15 15:35:06.482719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.437 [2024-07-15 15:35:06.482724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.437 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.437 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:12:11.437 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 [2024-07-15 15:35:06.615857] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 Malloc0 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 [2024-07-15 15:35:06.676813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:11.696 test case1: single bdev can't be used in multiple subsystems 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 [2024-07-15 15:35:06.700645] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:11.696 [2024-07-15 15:35:06.700796] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:11.696 [2024-07-15 15:35:06.700889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.696 2024/07/15 15:35:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:11.696 request: 00:12:11.696 { 00:12:11.696 "method": "nvmf_subsystem_add_ns", 00:12:11.696 "params": { 00:12:11.696 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:11.696 "namespace": { 00:12:11.696 "bdev_name": "Malloc0", 00:12:11.696 "no_auto_visible": false 00:12:11.696 } 00:12:11.696 } 00:12:11.696 } 00:12:11.696 Got JSON-RPC error response 00:12:11.696 GoRPCClient: error on JSON-RPC call 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@29 -- # nmic_status=1 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:11.696 Adding namespace failed - expected result. 00:12:11.696 test case2: host connect to nvmf target in multiple paths 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.696 [2024-07-15 15:35:06.712779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.696 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.955 15:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:11.955 15:35:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:11.955 15:35:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:11.955 15:35:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.955 15:35:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:11.955 15:35:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:14.488 15:35:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:14.488 15:35:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:14.488 15:35:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.488 15:35:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:14.488 15:35:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.488 15:35:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:14.488 15:35:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:14.488 [global] 00:12:14.488 thread=1 00:12:14.488 invalidate=1 00:12:14.488 rw=write 00:12:14.488 time_based=1 00:12:14.488 runtime=1 00:12:14.488 ioengine=libaio 00:12:14.488 direct=1 00:12:14.488 bs=4096 00:12:14.488 iodepth=1 00:12:14.488 norandommap=0 00:12:14.488 numjobs=1 00:12:14.488 00:12:14.488 verify_dump=1 00:12:14.488 verify_backlog=512 00:12:14.488 verify_state_save=0 00:12:14.488 do_verify=1 00:12:14.488 verify=crc32c-intel 00:12:14.488 [job0] 00:12:14.488 filename=/dev/nvme0n1 00:12:14.488 Could not set queue depth (nvme0n1) 00:12:14.488 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:14.488 fio-3.35 00:12:14.488 
Starting 1 thread 00:12:15.424 00:12:15.424 job0: (groupid=0, jobs=1): err= 0: pid=76430: Mon Jul 15 15:35:10 2024 00:12:15.424 read: IOPS=3206, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec) 00:12:15.424 slat (nsec): min=12622, max=60330, avg=16144.38, stdev=5375.53 00:12:15.424 clat (usec): min=118, max=689, avg=147.29, stdev=20.41 00:12:15.424 lat (usec): min=131, max=711, avg=163.44, stdev=21.50 00:12:15.424 clat percentiles (usec): 00:12:15.424 | 1.00th=[ 123], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 133], 00:12:15.424 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 149], 00:12:15.424 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 180], 00:12:15.424 | 99.00th=[ 194], 99.50th=[ 204], 99.90th=[ 347], 99.95th=[ 404], 00:12:15.424 | 99.99th=[ 693] 00:12:15.424 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:15.424 slat (nsec): min=17834, max=98603, avg=24231.26, stdev=7818.47 00:12:15.424 clat (usec): min=78, max=765, avg=105.12, stdev=23.00 00:12:15.424 lat (usec): min=99, max=787, avg=129.35, stdev=25.11 00:12:15.424 clat percentiles (usec): 00:12:15.424 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 92], 00:12:15.424 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 100], 60.00th=[ 103], 00:12:15.424 | 70.00th=[ 110], 80.00th=[ 117], 90.00th=[ 127], 95.00th=[ 137], 00:12:15.424 | 99.00th=[ 153], 99.50th=[ 178], 99.90th=[ 343], 99.95th=[ 502], 00:12:15.424 | 99.99th=[ 766] 00:12:15.424 bw ( KiB/s): min=14424, max=14424, per=100.00%, avg=14424.00, stdev= 0.00, samples=1 00:12:15.424 iops : min= 3606, max= 3606, avg=3606.00, stdev= 0.00, samples=1 00:12:15.424 lat (usec) : 100=26.88%, 250=72.83%, 500=0.25%, 750=0.03%, 1000=0.01% 00:12:15.424 cpu : usr=2.20%, sys=10.60%, ctx=6794, majf=0, minf=2 00:12:15.424 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.424 issued rwts: total=3210,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.424 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.424 00:12:15.424 Run status group 0 (all jobs): 00:12:15.424 READ: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=12.5MiB (13.1MB), run=1001-1001msec 00:12:15.424 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:12:15.424 00:12:15.424 Disk stats (read/write): 00:12:15.424 nvme0n1: ios=3041/3072, merge=0/0, ticks=494/379, in_queue=873, util=90.98% 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:15.424 
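For reference, the connect / wait-for-serial / disconnect cycle traced above reduces to the short sequence below. This is a sketch, not part of the captured run: it assumes nvme-cli is installed on the initiator and that the target from this log is still listening on 10.0.0.2 with serial SPDKISFASTANDAWESOME; the NQNs, host IDs and commands are copied from the trace above.

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb \
  --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb
# wait until the namespace shows up, keying off the subsystem serial number
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
  sleep 1
done
# run I/O against the block device here (e.g. the fio job printed above), then tear down
nvme disconnect -n nqn.2016-06.io.spdk:cnode1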
15:35:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:15.424 rmmod nvme_tcp 00:12:15.424 rmmod nvme_fabrics 00:12:15.424 rmmod nvme_keyring 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76339 ']' 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76339 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76339 ']' 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76339 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:15.424 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76339 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:15.682 killing process with pid 76339 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76339' 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76339 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76339 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:15.682 00:12:15.682 real 0m5.012s 00:12:15.682 user 0m16.319s 00:12:15.682 sys 0m1.346s 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.682 15:35:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:15.682 ************************************ 00:12:15.682 END TEST nvmf_nmic 00:12:15.682 ************************************ 00:12:15.682 15:35:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
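For reference, test case1 above ("single bdev can't be used in multiple subsystems") boils down to the RPC sequence below. This is a sketch, not part of the captured run: it assumes a running nvmf_tgt and that the harness's rpc_cmd wraps scripts/rpc.py from the SPDK repo (path assumed); the RPC names and arguments themselves are copied from the trace.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path to rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first subsystem claims the bdev

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# A second claim of the same bdev is expected to fail with Code=-32602 Msg=Invalid parameters,
# exactly as logged in test case1 above.
if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
  echo 'unexpected: duplicate namespace add succeeded' >&2
else
  echo ' Adding namespace failed - expected result.'
fi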
00:12:15.682 15:35:10 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:15.682 15:35:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:15.682 15:35:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.682 15:35:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:15.941 ************************************ 00:12:15.941 START TEST nvmf_fio_target 00:12:15.941 ************************************ 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:15.941 * Looking for test storage... 00:12:15.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:15.941 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:15.942 Cannot find device "nvmf_tgt_br" 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:15.942 Cannot find device "nvmf_tgt_br2" 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:12:15.942 Cannot find device "nvmf_tgt_br" 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:15.942 Cannot find device "nvmf_tgt_br2" 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:12:15.942 15:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:15.942 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:15.942 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.942 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:15.942 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.942 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:15.942 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:16.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:12:16.200 00:12:16.200 --- 10.0.0.2 ping statistics --- 00:12:16.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.200 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:16.200 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:16.200 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:12:16.200 00:12:16.200 --- 10.0.0.3 ping statistics --- 00:12:16.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.200 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:16.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:16.200 00:12:16.200 --- 10.0.0.1 ping statistics --- 00:12:16.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.200 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:16.200 15:35:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:16.201 15:35:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.201 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=76607 00:12:16.201 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 76607 00:12:16.201 15:35:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.201 15:35:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 76607 ']' 00:12:16.201 15:35:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.201 15:35:11 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:16.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.201 15:35:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.201 15:35:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:16.201 15:35:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.459 [2024-07-15 15:35:11.359918] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:12:16.459 [2024-07-15 15:35:11.360024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.459 [2024-07-15 15:35:11.499971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.459 [2024-07-15 15:35:11.559775] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.459 [2024-07-15 15:35:11.559824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.459 [2024-07-15 15:35:11.559851] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.459 [2024-07-15 15:35:11.559859] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.459 [2024-07-15 15:35:11.559866] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.459 [2024-07-15 15:35:11.560648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.459 [2024-07-15 15:35:11.560745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.459 [2024-07-15 15:35:11.560817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.459 [2024-07-15 15:35:11.560821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.393 15:35:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.393 15:35:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:12:17.393 15:35:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.393 15:35:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:17.393 15:35:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.393 15:35:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.393 15:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:17.651 [2024-07-15 15:35:12.532422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.651 15:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:17.909 15:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:17.909 15:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:18.167 15:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 
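At this point the fio_target test has built its virtual network, started nvmf_tgt inside the namespace, created the TCP transport, and allocated the first two malloc bdevs. A condensed, illustrative replay of that bring-up follows; interface names, addresses, and flags are copied from the trace, but the real nvmf_veth_init/nvmfappstart helpers add cleanup, retries, and error checking, and the relative paths assume the SPDK repo root:

# Virtual topology: one initiator veth on the host, two target veths inside a
# network namespace, all tied together by a bridge
# (10.0.0.1 initiator, 10.0.0.2/10.0.0.3 target).
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
modprobe nvme-tcp

# Target application plus the RPCs issued so far in the trace.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512    # -> Malloc0
./scripts/rpc.py bdev_malloc_create 64 512    # -> Malloc1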
00:12:18.167 15:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:18.425 15:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:18.425 15:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:18.683 15:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:18.683 15:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:18.941 15:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:19.199 15:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:19.199 15:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:19.765 15:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:19.765 15:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:19.765 15:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:19.765 15:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:20.023 15:35:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:20.281 15:35:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:20.281 15:35:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:20.539 15:35:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:20.539 15:35:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.798 15:35:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.056 [2024-07-15 15:35:16.068247] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.056 15:35:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:21.314 15:35:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:21.573 15:35:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.832 15:35:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:21.832 15:35:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:21.832 15:35:16 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.832 15:35:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:21.832 15:35:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:21.832 15:35:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:23.735 15:35:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:23.735 15:35:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:23.735 15:35:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.735 15:35:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:23.735 15:35:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.735 15:35:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:23.735 15:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:23.735 [global] 00:12:23.735 thread=1 00:12:23.735 invalidate=1 00:12:23.735 rw=write 00:12:23.735 time_based=1 00:12:23.735 runtime=1 00:12:23.735 ioengine=libaio 00:12:23.735 direct=1 00:12:23.735 bs=4096 00:12:23.735 iodepth=1 00:12:23.735 norandommap=0 00:12:23.735 numjobs=1 00:12:23.735 00:12:23.735 verify_dump=1 00:12:23.735 verify_backlog=512 00:12:23.735 verify_state_save=0 00:12:23.735 do_verify=1 00:12:23.735 verify=crc32c-intel 00:12:23.735 [job0] 00:12:23.735 filename=/dev/nvme0n1 00:12:23.735 [job1] 00:12:23.735 filename=/dev/nvme0n2 00:12:23.735 [job2] 00:12:23.735 filename=/dev/nvme0n3 00:12:23.735 [job3] 00:12:23.735 filename=/dev/nvme0n4 00:12:23.735 Could not set queue depth (nvme0n1) 00:12:23.735 Could not set queue depth (nvme0n2) 00:12:23.735 Could not set queue depth (nvme0n3) 00:12:23.735 Could not set queue depth (nvme0n4) 00:12:23.994 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:23.995 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:23.995 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:23.995 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:23.995 fio-3.35 00:12:23.995 Starting 4 threads 00:12:25.372 00:12:25.372 job0: (groupid=0, jobs=1): err= 0: pid=76902: Mon Jul 15 15:35:20 2024 00:12:25.372 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:12:25.372 slat (nsec): min=13219, max=77233, avg=16804.31, stdev=4936.22 00:12:25.372 clat (usec): min=132, max=282, avg=159.30, stdev=15.58 00:12:25.372 lat (usec): min=145, max=305, avg=176.11, stdev=16.41 00:12:25.373 clat percentiles (usec): 00:12:25.373 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:12:25.373 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:12:25.373 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 188], 00:12:25.373 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 241], 99.95th=[ 269], 00:12:25.373 | 99.99th=[ 281] 00:12:25.373 write: IOPS=3182, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec); 0 zone resets 00:12:25.373 slat (usec): min=18, max=137, avg=24.13, stdev= 6.78 00:12:25.373 clat (usec): min=91, max=352, avg=116.38, 
stdev=13.90 00:12:25.373 lat (usec): min=113, max=376, avg=140.51, stdev=16.02 00:12:25.373 clat percentiles (usec): 00:12:25.373 | 1.00th=[ 96], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 106], 00:12:25.373 | 30.00th=[ 109], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 117], 00:12:25.373 | 70.00th=[ 121], 80.00th=[ 127], 90.00th=[ 135], 95.00th=[ 143], 00:12:25.373 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 239], 00:12:25.373 | 99.99th=[ 355] 00:12:25.373 bw ( KiB/s): min=12488, max=12488, per=40.10%, avg=12488.00, stdev= 0.00, samples=1 00:12:25.373 iops : min= 3122, max= 3122, avg=3122.00, stdev= 0.00, samples=1 00:12:25.373 lat (usec) : 100=2.65%, 250=97.30%, 500=0.05% 00:12:25.373 cpu : usr=3.10%, sys=8.90%, ctx=6258, majf=0, minf=9 00:12:25.373 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.373 issued rwts: total=3072,3186,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.373 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.373 job1: (groupid=0, jobs=1): err= 0: pid=76903: Mon Jul 15 15:35:20 2024 00:12:25.373 read: IOPS=1212, BW=4851KiB/s (4968kB/s)(4856KiB/1001msec) 00:12:25.373 slat (usec): min=11, max=126, avg=18.45, stdev= 6.68 00:12:25.373 clat (usec): min=130, max=40912, avg=429.50, stdev=1166.56 00:12:25.373 lat (usec): min=155, max=40938, avg=447.95, stdev=1166.79 00:12:25.373 clat percentiles (usec): 00:12:25.373 | 1.00th=[ 255], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 351], 00:12:25.373 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 383], 00:12:25.373 | 70.00th=[ 400], 80.00th=[ 457], 90.00th=[ 490], 95.00th=[ 506], 00:12:25.373 | 99.00th=[ 627], 99.50th=[ 873], 99.90th=[ 2442], 99.95th=[41157], 00:12:25.373 | 99.99th=[41157] 00:12:25.373 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:25.373 slat (nsec): min=13891, max=98767, avg=29659.80, stdev=8005.01 00:12:25.373 clat (usec): min=101, max=1011, avg=263.44, stdev=67.74 00:12:25.373 lat (usec): min=126, max=1029, avg=293.10, stdev=67.76 00:12:25.373 clat percentiles (usec): 00:12:25.373 | 1.00th=[ 105], 5.00th=[ 115], 10.00th=[ 143], 20.00th=[ 237], 00:12:25.373 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:12:25.373 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 347], 95.00th=[ 379], 00:12:25.373 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 594], 99.95th=[ 1012], 00:12:25.373 | 99.99th=[ 1012] 00:12:25.373 bw ( KiB/s): min= 6388, max= 6388, per=20.51%, avg=6388.00, stdev= 0.00, samples=1 00:12:25.373 iops : min= 1597, max= 1597, avg=1597.00, stdev= 0.00, samples=1 00:12:25.373 lat (usec) : 250=16.80%, 500=80.04%, 750=2.84%, 1000=0.18% 00:12:25.373 lat (msec) : 2=0.07%, 4=0.04%, 50=0.04% 00:12:25.373 cpu : usr=1.30%, sys=5.60%, ctx=2751, majf=0, minf=8 00:12:25.373 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.373 issued rwts: total=1214,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.373 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.373 job2: (groupid=0, jobs=1): err= 0: pid=76904: Mon Jul 15 15:35:20 2024 00:12:25.373 read: IOPS=1212, BW=4851KiB/s (4968kB/s)(4856KiB/1001msec) 00:12:25.373 slat (nsec): 
min=11088, max=68605, avg=18155.66, stdev=5810.75 00:12:25.373 clat (usec): min=188, max=40860, avg=429.52, stdev=1164.87 00:12:25.373 lat (usec): min=205, max=40877, avg=447.68, stdev=1164.90 00:12:25.373 clat percentiles (usec): 00:12:25.373 | 1.00th=[ 265], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 351], 00:12:25.373 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 388], 00:12:25.373 | 70.00th=[ 400], 80.00th=[ 449], 90.00th=[ 486], 95.00th=[ 506], 00:12:25.373 | 99.00th=[ 668], 99.50th=[ 791], 99.90th=[ 2442], 99.95th=[40633], 00:12:25.373 | 99.99th=[40633] 00:12:25.373 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:25.373 slat (usec): min=13, max=131, avg=24.88, stdev= 8.81 00:12:25.373 clat (usec): min=111, max=965, avg=268.90, stdev=62.55 00:12:25.373 lat (usec): min=136, max=990, avg=293.78, stdev=62.23 00:12:25.373 clat percentiles (usec): 00:12:25.373 | 1.00th=[ 118], 5.00th=[ 126], 10.00th=[ 200], 20.00th=[ 243], 00:12:25.373 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 285], 00:12:25.373 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 334], 95.00th=[ 367], 00:12:25.373 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 619], 99.95th=[ 963], 00:12:25.373 | 99.99th=[ 963] 00:12:25.373 bw ( KiB/s): min= 6392, max= 6392, per=20.52%, avg=6392.00, stdev= 0.00, samples=1 00:12:25.373 iops : min= 1598, max= 1598, avg=1598.00, stdev= 0.00, samples=1 00:12:25.373 lat (usec) : 250=13.96%, 500=83.09%, 750=2.62%, 1000=0.25% 00:12:25.373 lat (msec) : 4=0.04%, 50=0.04% 00:12:25.373 cpu : usr=1.10%, sys=5.00%, ctx=2750, majf=0, minf=7 00:12:25.373 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.373 issued rwts: total=1214,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.373 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.373 job3: (groupid=0, jobs=1): err= 0: pid=76905: Mon Jul 15 15:35:20 2024 00:12:25.373 read: IOPS=1373, BW=5492KiB/s (5624kB/s)(5492KiB/1000msec) 00:12:25.373 slat (nsec): min=15496, max=68771, avg=27655.73, stdev=8781.11 00:12:25.373 clat (usec): min=138, max=1461, avg=368.88, stdev=117.06 00:12:25.373 lat (usec): min=154, max=1487, avg=396.54, stdev=121.05 00:12:25.373 clat percentiles (usec): 00:12:25.373 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 174], 20.00th=[ 326], 00:12:25.373 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 375], 00:12:25.373 | 70.00th=[ 404], 80.00th=[ 486], 90.00th=[ 519], 95.00th=[ 529], 00:12:25.373 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 1336], 99.95th=[ 1467], 00:12:25.373 | 99.99th=[ 1467] 00:12:25.373 write: IOPS=1536, BW=6144KiB/s (6291kB/s)(6144KiB/1000msec); 0 zone resets 00:12:25.373 slat (usec): min=23, max=136, avg=36.76, stdev= 9.08 00:12:25.373 clat (usec): min=110, max=780, avg=254.19, stdev=51.02 00:12:25.373 lat (usec): min=143, max=808, avg=290.96, stdev=53.09 00:12:25.373 clat percentiles (usec): 00:12:25.373 | 1.00th=[ 133], 5.00th=[ 192], 10.00th=[ 202], 20.00th=[ 221], 00:12:25.373 | 30.00th=[ 233], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 262], 00:12:25.373 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 338], 00:12:25.373 | 99.00th=[ 412], 99.50th=[ 494], 99.90th=[ 635], 99.95th=[ 783], 00:12:25.373 | 99.99th=[ 783] 00:12:25.373 bw ( KiB/s): min= 8192, max= 8192, per=26.30%, avg=8192.00, stdev= 0.00, samples=1 00:12:25.373 iops : 
min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:25.373 lat (usec) : 250=32.38%, 500=59.37%, 750=8.15%, 1000=0.03% 00:12:25.373 lat (msec) : 2=0.07% 00:12:25.373 cpu : usr=1.80%, sys=7.10%, ctx=2909, majf=0, minf=11 00:12:25.373 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.373 issued rwts: total=1373,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.373 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.373 00:12:25.373 Run status group 0 (all jobs): 00:12:25.373 READ: bw=26.8MiB/s (28.1MB/s), 4851KiB/s-12.0MiB/s (4968kB/s-12.6MB/s), io=26.8MiB (28.2MB), run=1000-1001msec 00:12:25.373 WRITE: bw=30.4MiB/s (31.9MB/s), 6138KiB/s-12.4MiB/s (6285kB/s-13.0MB/s), io=30.4MiB (31.9MB), run=1000-1001msec 00:12:25.373 00:12:25.373 Disk stats (read/write): 00:12:25.373 nvme0n1: ios=2610/2845, merge=0/0, ticks=449/363, in_queue=812, util=88.08% 00:12:25.373 nvme0n2: ios=1065/1309, merge=0/0, ticks=454/359, in_queue=813, util=88.25% 00:12:25.373 nvme0n3: ios=1024/1311, merge=0/0, ticks=445/333, in_queue=778, util=89.15% 00:12:25.373 nvme0n4: ios=1066/1536, merge=0/0, ticks=389/415, in_queue=804, util=89.71% 00:12:25.373 15:35:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:25.373 [global] 00:12:25.373 thread=1 00:12:25.373 invalidate=1 00:12:25.373 rw=randwrite 00:12:25.373 time_based=1 00:12:25.373 runtime=1 00:12:25.373 ioengine=libaio 00:12:25.373 direct=1 00:12:25.373 bs=4096 00:12:25.373 iodepth=1 00:12:25.373 norandommap=0 00:12:25.373 numjobs=1 00:12:25.373 00:12:25.373 verify_dump=1 00:12:25.373 verify_backlog=512 00:12:25.373 verify_state_save=0 00:12:25.373 do_verify=1 00:12:25.373 verify=crc32c-intel 00:12:25.373 [job0] 00:12:25.373 filename=/dev/nvme0n1 00:12:25.373 [job1] 00:12:25.373 filename=/dev/nvme0n2 00:12:25.373 [job2] 00:12:25.373 filename=/dev/nvme0n3 00:12:25.373 [job3] 00:12:25.373 filename=/dev/nvme0n4 00:12:25.373 Could not set queue depth (nvme0n1) 00:12:25.373 Could not set queue depth (nvme0n2) 00:12:25.373 Could not set queue depth (nvme0n3) 00:12:25.373 Could not set queue depth (nvme0n4) 00:12:25.373 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:25.373 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:25.373 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:25.373 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:25.373 fio-3.35 00:12:25.373 Starting 4 threads 00:12:26.774 00:12:26.774 job0: (groupid=0, jobs=1): err= 0: pid=76964: Mon Jul 15 15:35:21 2024 00:12:26.774 read: IOPS=1882, BW=7528KiB/s (7709kB/s)(7536KiB/1001msec) 00:12:26.774 slat (nsec): min=10808, max=54898, avg=15521.97, stdev=4594.72 00:12:26.774 clat (usec): min=156, max=1600, avg=270.55, stdev=47.19 00:12:26.774 lat (usec): min=178, max=1622, avg=286.07, stdev=47.14 00:12:26.774 clat percentiles (usec): 00:12:26.774 | 1.00th=[ 176], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:12:26.774 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:12:26.774 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 
293], 95.00th=[ 314], 00:12:26.774 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 791], 99.95th=[ 1598], 00:12:26.774 | 99.99th=[ 1598] 00:12:26.774 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:26.774 slat (usec): min=11, max=133, avg=24.22, stdev= 8.21 00:12:26.774 clat (usec): min=22, max=7736, avg=197.30, stdev=202.43 00:12:26.774 lat (usec): min=127, max=7759, avg=221.52, stdev=202.04 00:12:26.774 clat percentiles (usec): 00:12:26.774 | 1.00th=[ 111], 5.00th=[ 120], 10.00th=[ 125], 20.00th=[ 133], 00:12:26.774 | 30.00th=[ 161], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 206], 00:12:26.774 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 253], 00:12:26.774 | 99.00th=[ 310], 99.50th=[ 445], 99.90th=[ 2606], 99.95th=[ 3556], 00:12:26.774 | 99.99th=[ 7767] 00:12:26.774 bw ( KiB/s): min= 8192, max= 8192, per=20.51%, avg=8192.00, stdev= 0.00, samples=1 00:12:26.774 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:26.774 lat (usec) : 50=0.05%, 100=0.05%, 250=57.40%, 500=42.04%, 750=0.23% 00:12:26.774 lat (usec) : 1000=0.10% 00:12:26.774 lat (msec) : 2=0.03%, 4=0.08%, 10=0.03% 00:12:26.774 cpu : usr=1.60%, sys=6.10%, ctx=3967, majf=0, minf=13 00:12:26.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:26.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.774 issued rwts: total=1884,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:26.774 job1: (groupid=0, jobs=1): err= 0: pid=76965: Mon Jul 15 15:35:21 2024 00:12:26.774 read: IOPS=1984, BW=7936KiB/s (8127kB/s)(7944KiB/1001msec) 00:12:26.774 slat (nsec): min=10692, max=87860, avg=15587.31, stdev=4984.84 00:12:26.774 clat (usec): min=141, max=681, avg=263.86, stdev=41.74 00:12:26.774 lat (usec): min=157, max=701, avg=279.45, stdev=41.57 00:12:26.774 clat percentiles (usec): 00:12:26.774 | 1.00th=[ 153], 5.00th=[ 172], 10.00th=[ 237], 20.00th=[ 249], 00:12:26.774 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:12:26.774 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 314], 00:12:26.774 | 99.00th=[ 351], 99.50th=[ 388], 99.90th=[ 676], 99.95th=[ 685], 00:12:26.774 | 99.99th=[ 685] 00:12:26.774 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:26.774 slat (nsec): min=10767, max=86659, avg=23361.53, stdev=6404.76 00:12:26.774 clat (usec): min=100, max=851, avg=190.30, stdev=52.71 00:12:26.774 lat (usec): min=125, max=871, avg=213.66, stdev=51.52 00:12:26.774 clat percentiles (usec): 00:12:26.774 | 1.00th=[ 111], 5.00th=[ 119], 10.00th=[ 123], 20.00th=[ 131], 00:12:26.774 | 30.00th=[ 163], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 206], 00:12:26.774 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 243], 95.00th=[ 258], 00:12:26.774 | 99.00th=[ 293], 99.50th=[ 445], 99.90th=[ 578], 99.95th=[ 627], 00:12:26.774 | 99.99th=[ 848] 00:12:26.774 bw ( KiB/s): min= 8776, max= 8776, per=21.97%, avg=8776.00, stdev= 0.00, samples=1 00:12:26.774 iops : min= 2194, max= 2194, avg=2194.00, stdev= 0.00, samples=1 00:12:26.774 lat (usec) : 250=58.21%, 500=41.40%, 750=0.37%, 1000=0.02% 00:12:26.774 cpu : usr=1.40%, sys=6.40%, ctx=4048, majf=0, minf=11 00:12:26.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:26.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:12:26.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.774 issued rwts: total=1986,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:26.774 job2: (groupid=0, jobs=1): err= 0: pid=76966: Mon Jul 15 15:35:21 2024 00:12:26.774 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:26.774 slat (nsec): min=13291, max=49700, avg=16199.77, stdev=3497.43 00:12:26.774 clat (usec): min=156, max=285, avg=186.01, stdev=16.34 00:12:26.774 lat (usec): min=170, max=300, avg=202.21, stdev=16.56 00:12:26.774 clat percentiles (usec): 00:12:26.774 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:12:26.774 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:12:26.774 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 217], 00:12:26.774 | 99.00th=[ 241], 99.50th=[ 258], 99.90th=[ 281], 99.95th=[ 281], 00:12:26.774 | 99.99th=[ 285] 00:12:26.774 write: IOPS=2824, BW=11.0MiB/s (11.6MB/s)(11.0MiB/1001msec); 0 zone resets 00:12:26.774 slat (nsec): min=19233, max=77223, avg=23378.30, stdev=4981.77 00:12:26.774 clat (usec): min=113, max=1535, avg=143.91, stdev=31.17 00:12:26.774 lat (usec): min=133, max=1568, avg=167.28, stdev=31.64 00:12:26.774 clat percentiles (usec): 00:12:26.774 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 127], 20.00th=[ 131], 00:12:26.774 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:12:26.774 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 174], 00:12:26.774 | 99.00th=[ 194], 99.50th=[ 200], 99.90th=[ 289], 99.95th=[ 510], 00:12:26.774 | 99.99th=[ 1532] 00:12:26.774 bw ( KiB/s): min=12288, max=12288, per=30.77%, avg=12288.00, stdev= 0.00, samples=1 00:12:26.774 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:26.774 lat (usec) : 250=99.57%, 500=0.39%, 750=0.02% 00:12:26.774 lat (msec) : 2=0.02% 00:12:26.774 cpu : usr=2.10%, sys=7.90%, ctx=5387, majf=0, minf=7 00:12:26.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:26.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.774 issued rwts: total=2560,2827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:26.774 job3: (groupid=0, jobs=1): err= 0: pid=76967: Mon Jul 15 15:35:21 2024 00:12:26.774 read: IOPS=2594, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 00:12:26.774 slat (nsec): min=12682, max=60113, avg=16076.60, stdev=4238.26 00:12:26.774 clat (usec): min=145, max=821, avg=175.77, stdev=20.37 00:12:26.774 lat (usec): min=159, max=858, avg=191.85, stdev=21.01 00:12:26.774 clat percentiles (usec): 00:12:26.774 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:12:26.774 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:12:26.774 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 200], 00:12:26.774 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 392], 99.95th=[ 510], 00:12:26.774 | 99.99th=[ 824] 00:12:26.774 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:26.774 slat (nsec): min=19317, max=84002, avg=23065.51, stdev=5704.16 00:12:26.774 clat (usec): min=107, max=1708, avg=137.12, stdev=33.35 00:12:26.774 lat (usec): min=127, max=1730, avg=160.18, stdev=33.93 00:12:26.774 clat percentiles (usec): 00:12:26.774 | 1.00th=[ 115], 5.00th=[ 119], 10.00th=[ 
122], 20.00th=[ 126], 00:12:26.774 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:12:26.774 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 161], 00:12:26.774 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 231], 99.95th=[ 685], 00:12:26.774 | 99.99th=[ 1713] 00:12:26.774 bw ( KiB/s): min=12288, max=12288, per=30.77%, avg=12288.00, stdev= 0.00, samples=1 00:12:26.774 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:26.774 lat (usec) : 250=99.88%, 500=0.05%, 750=0.04%, 1000=0.02% 00:12:26.774 lat (msec) : 2=0.02% 00:12:26.774 cpu : usr=2.30%, sys=8.20%, ctx=5670, majf=0, minf=14 00:12:26.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:26.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.775 issued rwts: total=2597,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.775 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:26.775 00:12:26.775 Run status group 0 (all jobs): 00:12:26.775 READ: bw=35.2MiB/s (36.9MB/s), 7528KiB/s-10.1MiB/s (7709kB/s-10.6MB/s), io=35.3MiB (37.0MB), run=1001-1001msec 00:12:26.775 WRITE: bw=39.0MiB/s (40.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.0MiB (40.9MB), run=1001-1001msec 00:12:26.775 00:12:26.775 Disk stats (read/write): 00:12:26.775 nvme0n1: ios=1585/1770, merge=0/0, ticks=450/347, in_queue=797, util=84.82% 00:12:26.775 nvme0n2: ios=1536/1911, merge=0/0, ticks=398/380, in_queue=778, util=85.52% 00:12:26.775 nvme0n3: ios=2048/2475, merge=0/0, ticks=387/380, in_queue=767, util=88.60% 00:12:26.775 nvme0n4: ios=2178/2560, merge=0/0, ticks=390/368, in_queue=758, util=89.45% 00:12:26.775 15:35:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:26.775 [global] 00:12:26.775 thread=1 00:12:26.775 invalidate=1 00:12:26.775 rw=write 00:12:26.775 time_based=1 00:12:26.775 runtime=1 00:12:26.775 ioengine=libaio 00:12:26.775 direct=1 00:12:26.775 bs=4096 00:12:26.775 iodepth=128 00:12:26.775 norandommap=0 00:12:26.775 numjobs=1 00:12:26.775 00:12:26.775 verify_dump=1 00:12:26.775 verify_backlog=512 00:12:26.775 verify_state_save=0 00:12:26.775 do_verify=1 00:12:26.775 verify=crc32c-intel 00:12:26.775 [job0] 00:12:26.775 filename=/dev/nvme0n1 00:12:26.775 [job1] 00:12:26.775 filename=/dev/nvme0n2 00:12:26.775 [job2] 00:12:26.775 filename=/dev/nvme0n3 00:12:26.775 [job3] 00:12:26.775 filename=/dev/nvme0n4 00:12:26.775 Could not set queue depth (nvme0n1) 00:12:26.775 Could not set queue depth (nvme0n2) 00:12:26.775 Could not set queue depth (nvme0n3) 00:12:26.775 Could not set queue depth (nvme0n4) 00:12:26.775 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:26.775 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:26.775 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:26.775 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:26.775 fio-3.35 00:12:26.775 Starting 4 threads 00:12:28.151 00:12:28.151 job0: (groupid=0, jobs=1): err= 0: pid=77024: Mon Jul 15 15:35:22 2024 00:12:28.151 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(9.93MiB/1004msec) 00:12:28.151 slat (usec): min=5, max=8424, avg=194.95, stdev=908.70 
00:12:28.151 clat (usec): min=647, max=31750, avg=24505.59, stdev=3069.59 00:12:28.151 lat (usec): min=5955, max=32081, avg=24700.54, stdev=2973.24 00:12:28.151 clat percentiles (usec): 00:12:28.151 | 1.00th=[ 6390], 5.00th=[19530], 10.00th=[21365], 20.00th=[24249], 00:12:28.151 | 30.00th=[24511], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:12:28.151 | 70.00th=[25297], 80.00th=[25822], 90.00th=[27395], 95.00th=[27919], 00:12:28.151 | 99.00th=[30278], 99.50th=[30802], 99.90th=[31851], 99.95th=[31851], 00:12:28.151 | 99.99th=[31851] 00:12:28.151 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:12:28.151 slat (usec): min=14, max=5559, avg=188.73, stdev=817.85 00:12:28.151 clat (usec): min=15260, max=36897, avg=25092.01, stdev=4412.66 00:12:28.151 lat (usec): min=15309, max=36923, avg=25280.74, stdev=4383.13 00:12:28.151 clat percentiles (usec): 00:12:28.151 | 1.00th=[17957], 5.00th=[19268], 10.00th=[20579], 20.00th=[22676], 00:12:28.151 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:12:28.151 | 70.00th=[26084], 80.00th=[27919], 90.00th=[32637], 95.00th=[35914], 00:12:28.151 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:12:28.151 | 99.99th=[36963] 00:12:28.151 bw ( KiB/s): min= 8440, max=12064, per=16.00%, avg=10252.00, stdev=2562.55, samples=2 00:12:28.151 iops : min= 2110, max= 3016, avg=2563.00, stdev=640.64, samples=2 00:12:28.151 lat (usec) : 750=0.02% 00:12:28.151 lat (msec) : 10=0.63%, 20=6.13%, 50=93.22% 00:12:28.151 cpu : usr=3.19%, sys=7.78%, ctx=261, majf=0, minf=10 00:12:28.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:28.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:28.151 issued rwts: total=2543,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:28.151 job1: (groupid=0, jobs=1): err= 0: pid=77025: Mon Jul 15 15:35:22 2024 00:12:28.151 read: IOPS=5539, BW=21.6MiB/s (22.7MB/s)(21.7MiB/1004msec) 00:12:28.151 slat (usec): min=4, max=3779, avg=85.57, stdev=448.23 00:12:28.151 clat (usec): min=3221, max=15513, avg=11558.57, stdev=1113.16 00:12:28.151 lat (usec): min=3234, max=16277, avg=11644.15, stdev=1154.50 00:12:28.151 clat percentiles (usec): 00:12:28.151 | 1.00th=[ 7832], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[11338], 00:12:28.151 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:12:28.151 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12649], 00:12:28.151 | 99.00th=[14353], 99.50th=[14615], 99.90th=[15008], 99.95th=[15270], 00:12:28.151 | 99.99th=[15533] 00:12:28.151 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:12:28.151 slat (usec): min=10, max=3602, avg=85.41, stdev=411.39 00:12:28.151 clat (usec): min=7874, max=14432, avg=11123.09, stdev=1266.64 00:12:28.151 lat (usec): min=7897, max=14460, avg=11208.50, stdev=1235.15 00:12:28.151 clat percentiles (usec): 00:12:28.151 | 1.00th=[ 8455], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:12:28.151 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:12:28.151 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12518], 00:12:28.151 | 99.00th=[13042], 99.50th=[13042], 99.90th=[13435], 99.95th=[13566], 00:12:28.151 | 99.99th=[14484] 00:12:28.151 bw ( KiB/s): min=21592, max=23464, per=35.16%, avg=22528.00, 
stdev=1323.70, samples=2 00:12:28.151 iops : min= 5398, max= 5866, avg=5632.00, stdev=330.93, samples=2 00:12:28.151 lat (msec) : 4=0.38%, 10=15.51%, 20=84.12% 00:12:28.151 cpu : usr=4.39%, sys=15.05%, ctx=384, majf=0, minf=9 00:12:28.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:28.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:28.151 issued rwts: total=5562,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:28.151 job2: (groupid=0, jobs=1): err= 0: pid=77026: Mon Jul 15 15:35:22 2024 00:12:28.151 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:12:28.151 slat (usec): min=6, max=6233, avg=184.98, stdev=935.54 00:12:28.151 clat (usec): min=16266, max=27778, avg=24319.79, stdev=1584.87 00:12:28.151 lat (usec): min=19916, max=27813, avg=24504.78, stdev=1298.13 00:12:28.151 clat percentiles (usec): 00:12:28.151 | 1.00th=[19006], 5.00th=[21365], 10.00th=[21890], 20.00th=[23725], 00:12:28.151 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[24773], 00:12:28.151 | 70.00th=[25035], 80.00th=[25035], 90.00th=[25822], 95.00th=[26346], 00:12:28.151 | 99.00th=[27132], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:12:28.151 | 99.99th=[27657] 00:12:28.151 write: IOPS=2895, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1006msec); 0 zone resets 00:12:28.151 slat (usec): min=17, max=5997, avg=172.83, stdev=815.17 00:12:28.151 clat (usec): min=1903, max=25395, avg=22032.42, stdev=2957.74 00:12:28.151 lat (usec): min=6635, max=25422, avg=22205.25, stdev=2852.12 00:12:28.151 clat percentiles (usec): 00:12:28.151 | 1.00th=[ 7570], 5.00th=[17957], 10.00th=[18220], 20.00th=[18744], 00:12:28.151 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23462], 60.00th=[23462], 00:12:28.151 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24511], 00:12:28.151 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:12:28.151 | 99.99th=[25297] 00:12:28.151 bw ( KiB/s): min=10240, max=12064, per=17.41%, avg=11152.00, stdev=1289.76, samples=2 00:12:28.151 iops : min= 2560, max= 3016, avg=2788.00, stdev=322.44, samples=2 00:12:28.151 lat (msec) : 2=0.02%, 10=0.58%, 20=13.37%, 50=86.02% 00:12:28.151 cpu : usr=2.39%, sys=9.35%, ctx=189, majf=0, minf=11 00:12:28.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:28.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:28.151 issued rwts: total=2560,2913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:28.151 job3: (groupid=0, jobs=1): err= 0: pid=77027: Mon Jul 15 15:35:22 2024 00:12:28.151 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:12:28.151 slat (usec): min=8, max=4215, avg=100.31, stdev=536.90 00:12:28.151 clat (usec): min=10157, max=17980, avg=13591.45, stdev=1015.67 00:12:28.151 lat (usec): min=10172, max=18610, avg=13691.76, stdev=1084.35 00:12:28.151 clat percentiles (usec): 00:12:28.151 | 1.00th=[10552], 5.00th=[11469], 10.00th=[12780], 20.00th=[13173], 00:12:28.151 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:12:28.151 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14746], 95.00th=[15139], 00:12:28.151 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 
99.95th=[17695], 00:12:28.151 | 99.99th=[17957] 00:12:28.151 write: IOPS=5002, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1001msec); 0 zone resets 00:12:28.151 slat (usec): min=11, max=3951, avg=99.67, stdev=493.31 00:12:28.151 clat (usec): min=693, max=16932, avg=12754.41, stdev=1709.11 00:12:28.151 lat (usec): min=717, max=16980, avg=12854.07, stdev=1679.25 00:12:28.151 clat percentiles (usec): 00:12:28.151 | 1.00th=[ 5276], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10945], 00:12:28.151 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:12:28.151 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14222], 95.00th=[14484], 00:12:28.151 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16450], 99.95th=[16712], 00:12:28.151 | 99.99th=[16909] 00:12:28.151 bw ( KiB/s): min=18520, max=20521, per=30.47%, avg=19520.50, stdev=1414.92, samples=2 00:12:28.151 iops : min= 4630, max= 5130, avg=4880.00, stdev=353.55, samples=2 00:12:28.151 lat (usec) : 750=0.03%, 1000=0.03% 00:12:28.151 lat (msec) : 4=0.19%, 10=2.72%, 20=97.03% 00:12:28.151 cpu : usr=4.20%, sys=13.60%, ctx=333, majf=0, minf=7 00:12:28.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:28.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:28.151 issued rwts: total=4608,5008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:28.151 00:12:28.151 Run status group 0 (all jobs): 00:12:28.151 READ: bw=59.3MiB/s (62.2MB/s), 9.89MiB/s-21.6MiB/s (10.4MB/s-22.7MB/s), io=59.7MiB (62.6MB), run=1001-1006msec 00:12:28.151 WRITE: bw=62.6MiB/s (65.6MB/s), 9.96MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=62.9MiB (66.0MB), run=1001-1006msec 00:12:28.151 00:12:28.151 Disk stats (read/write): 00:12:28.151 nvme0n1: ios=2097/2399, merge=0/0, ticks=12072/13332, in_queue=25404, util=88.15% 00:12:28.151 nvme0n2: ios=4613/5028, merge=0/0, ticks=15518/15448, in_queue=30966, util=88.19% 00:12:28.151 nvme0n3: ios=2112/2560, merge=0/0, ticks=12201/12847, in_queue=25048, util=89.36% 00:12:28.151 nvme0n4: ios=4096/4172, merge=0/0, ticks=16607/15040, in_queue=31647, util=89.82% 00:12:28.151 15:35:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:28.151 [global] 00:12:28.151 thread=1 00:12:28.151 invalidate=1 00:12:28.151 rw=randwrite 00:12:28.151 time_based=1 00:12:28.151 runtime=1 00:12:28.151 ioengine=libaio 00:12:28.151 direct=1 00:12:28.151 bs=4096 00:12:28.151 iodepth=128 00:12:28.151 norandommap=0 00:12:28.151 numjobs=1 00:12:28.152 00:12:28.152 verify_dump=1 00:12:28.152 verify_backlog=512 00:12:28.152 verify_state_save=0 00:12:28.152 do_verify=1 00:12:28.152 verify=crc32c-intel 00:12:28.152 [job0] 00:12:28.152 filename=/dev/nvme0n1 00:12:28.152 [job1] 00:12:28.152 filename=/dev/nvme0n2 00:12:28.152 [job2] 00:12:28.152 filename=/dev/nvme0n3 00:12:28.152 [job3] 00:12:28.152 filename=/dev/nvme0n4 00:12:28.152 Could not set queue depth (nvme0n1) 00:12:28.152 Could not set queue depth (nvme0n2) 00:12:28.152 Could not set queue depth (nvme0n3) 00:12:28.152 Could not set queue depth (nvme0n4) 00:12:28.152 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:28.152 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:28.152 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:28.152 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:28.152 fio-3.35 00:12:28.152 Starting 4 threads 00:12:29.529 00:12:29.529 job0: (groupid=0, jobs=1): err= 0: pid=77081: Mon Jul 15 15:35:24 2024 00:12:29.529 read: IOPS=4710, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec) 00:12:29.529 slat (usec): min=4, max=12039, avg=107.89, stdev=658.33 00:12:29.529 clat (usec): min=1379, max=25667, avg=13734.59, stdev=3531.00 00:12:29.529 lat (usec): min=4526, max=26094, avg=13842.48, stdev=3560.40 00:12:29.529 clat percentiles (usec): 00:12:29.529 | 1.00th=[ 6325], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11207], 00:12:29.529 | 30.00th=[11863], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:12:29.529 | 70.00th=[14353], 80.00th=[15926], 90.00th=[19006], 95.00th=[21627], 00:12:29.529 | 99.00th=[24511], 99.50th=[25035], 99.90th=[25560], 99.95th=[25560], 00:12:29.529 | 99.99th=[25560] 00:12:29.529 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:12:29.529 slat (usec): min=4, max=10506, avg=88.16, stdev=471.39 00:12:29.529 clat (usec): min=3603, max=25582, avg=12166.09, stdev=2500.40 00:12:29.529 lat (usec): min=3623, max=25614, avg=12254.26, stdev=2545.75 00:12:29.529 clat percentiles (usec): 00:12:29.529 | 1.00th=[ 5145], 5.00th=[ 6718], 10.00th=[ 8356], 20.00th=[10421], 00:12:29.529 | 30.00th=[11469], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:12:29.529 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14484], 95.00th=[14484], 00:12:29.529 | 99.00th=[14746], 99.50th=[16450], 99.90th=[23987], 99.95th=[25297], 00:12:29.529 | 99.99th=[25560] 00:12:29.529 bw ( KiB/s): min=20480, max=20505, per=28.38%, avg=20492.50, stdev=17.68, samples=2 00:12:29.529 iops : min= 5120, max= 5126, avg=5123.00, stdev= 4.24, samples=2 00:12:29.529 lat (msec) : 2=0.01%, 4=0.06%, 10=11.84%, 20=84.14%, 50=3.95% 00:12:29.529 cpu : usr=5.38%, sys=11.75%, ctx=636, majf=0, minf=5 00:12:29.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:29.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:29.529 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:29.529 job1: (groupid=0, jobs=1): err= 0: pid=77082: Mon Jul 15 15:35:24 2024 00:12:29.529 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:12:29.529 slat (usec): min=3, max=11652, avg=121.38, stdev=743.52 00:12:29.529 clat (usec): min=4721, max=33481, avg=15623.78, stdev=5089.28 00:12:29.529 lat (usec): min=4730, max=34179, avg=15745.16, stdev=5130.28 00:12:29.529 clat percentiles (usec): 00:12:29.530 | 1.00th=[ 5866], 5.00th=[10159], 10.00th=[10683], 20.00th=[11994], 00:12:29.530 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[14877], 00:12:29.530 | 70.00th=[16581], 80.00th=[21103], 90.00th=[23725], 95.00th=[25297], 00:12:29.530 | 99.00th=[28967], 99.50th=[29754], 99.90th=[33424], 99.95th=[33424], 00:12:29.530 | 99.99th=[33424] 00:12:29.530 write: IOPS=4396, BW=17.2MiB/s (18.0MB/s)(17.4MiB/1011msec); 0 zone resets 00:12:29.530 slat (usec): min=5, max=10375, avg=105.64, stdev=492.16 00:12:29.530 clat (usec): min=4233, max=36926, avg=14404.92, stdev=5400.69 00:12:29.530 lat (usec): min=4256, max=36934, avg=14510.56, 
stdev=5447.12 00:12:29.530 clat percentiles (usec): 00:12:29.530 | 1.00th=[ 5276], 5.00th=[ 6783], 10.00th=[ 8160], 20.00th=[11600], 00:12:29.530 | 30.00th=[12649], 40.00th=[13435], 50.00th=[13566], 60.00th=[13829], 00:12:29.530 | 70.00th=[13960], 80.00th=[14484], 90.00th=[24249], 95.00th=[26608], 00:12:29.530 | 99.00th=[29754], 99.50th=[31327], 99.90th=[32900], 99.95th=[36439], 00:12:29.530 | 99.99th=[36963] 00:12:29.530 bw ( KiB/s): min=14056, max=20521, per=23.94%, avg=17288.50, stdev=4571.45, samples=2 00:12:29.530 iops : min= 3514, max= 5130, avg=4322.00, stdev=1142.68, samples=2 00:12:29.530 lat (msec) : 10=9.53%, 20=72.04%, 50=18.43% 00:12:29.530 cpu : usr=4.95%, sys=9.80%, ctx=755, majf=0, minf=13 00:12:29.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:29.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:29.530 issued rwts: total=4096,4445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:29.530 job2: (groupid=0, jobs=1): err= 0: pid=77083: Mon Jul 15 15:35:24 2024 00:12:29.530 read: IOPS=4180, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1010msec) 00:12:29.530 slat (usec): min=5, max=13548, avg=125.38, stdev=778.47 00:12:29.530 clat (usec): min=1216, max=28815, avg=15502.64, stdev=4201.20 00:12:29.530 lat (usec): min=5879, max=28831, avg=15628.02, stdev=4231.44 00:12:29.530 clat percentiles (usec): 00:12:29.530 | 1.00th=[ 6521], 5.00th=[10814], 10.00th=[11600], 20.00th=[12125], 00:12:29.530 | 30.00th=[12780], 40.00th=[14091], 50.00th=[14353], 60.00th=[14877], 00:12:29.530 | 70.00th=[17171], 80.00th=[18220], 90.00th=[21890], 95.00th=[24511], 00:12:29.530 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28705], 99.95th=[28705], 00:12:29.530 | 99.99th=[28705] 00:12:29.530 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:12:29.530 slat (usec): min=4, max=11663, avg=95.72, stdev=403.44 00:12:29.530 clat (usec): min=4912, max=28772, avg=13558.90, stdev=2864.33 00:12:29.530 lat (usec): min=4932, max=28782, avg=13654.62, stdev=2894.11 00:12:29.530 clat percentiles (usec): 00:12:29.530 | 1.00th=[ 5604], 5.00th=[ 7111], 10.00th=[ 8455], 20.00th=[11731], 00:12:29.530 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14484], 60.00th=[14877], 00:12:29.530 | 70.00th=[15401], 80.00th=[15533], 90.00th=[15795], 95.00th=[15926], 00:12:29.530 | 99.00th=[16188], 99.50th=[16319], 99.90th=[28181], 99.95th=[28443], 00:12:29.530 | 99.99th=[28705] 00:12:29.530 bw ( KiB/s): min=17490, max=19392, per=25.54%, avg=18441.00, stdev=1344.92, samples=2 00:12:29.530 iops : min= 4372, max= 4848, avg=4610.00, stdev=336.58, samples=2 00:12:29.530 lat (msec) : 2=0.01%, 10=8.29%, 20=84.65%, 50=7.04% 00:12:29.530 cpu : usr=4.86%, sys=10.80%, ctx=662, majf=0, minf=11 00:12:29.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:29.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:29.530 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:29.530 job3: (groupid=0, jobs=1): err= 0: pid=77084: Mon Jul 15 15:35:24 2024 00:12:29.530 read: IOPS=3600, BW=14.1MiB/s (14.7MB/s)(14.2MiB/1012msec) 00:12:29.530 slat (usec): min=4, max=13364, avg=138.58, stdev=874.17 00:12:29.530 
clat (usec): min=5493, max=39130, avg=17106.65, stdev=5142.82 00:12:29.530 lat (usec): min=5508, max=39164, avg=17245.23, stdev=5192.45 00:12:29.530 clat percentiles (usec): 00:12:29.530 | 1.00th=[ 6456], 5.00th=[11469], 10.00th=[11731], 20.00th=[13042], 00:12:29.530 | 30.00th=[13960], 40.00th=[14484], 50.00th=[14877], 60.00th=[17171], 00:12:29.530 | 70.00th=[18744], 80.00th=[21890], 90.00th=[25560], 95.00th=[26608], 00:12:29.530 | 99.00th=[30802], 99.50th=[30802], 99.90th=[32900], 99.95th=[35914], 00:12:29.530 | 99.99th=[39060] 00:12:29.530 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec); 0 zone resets 00:12:29.530 slat (usec): min=4, max=11757, avg=113.58, stdev=537.80 00:12:29.530 clat (usec): min=4564, max=37006, avg=16055.68, stdev=5519.24 00:12:29.530 lat (usec): min=4587, max=37824, avg=16169.26, stdev=5566.27 00:12:29.530 clat percentiles (usec): 00:12:29.530 | 1.00th=[ 5669], 5.00th=[ 7373], 10.00th=[ 9110], 20.00th=[13304], 00:12:29.530 | 30.00th=[14484], 40.00th=[15139], 50.00th=[15401], 60.00th=[15533], 00:12:29.530 | 70.00th=[15795], 80.00th=[20317], 90.00th=[24773], 95.00th=[26608], 00:12:29.530 | 99.00th=[32637], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:12:29.530 | 99.99th=[36963] 00:12:29.530 bw ( KiB/s): min=14576, max=17648, per=22.31%, avg=16112.00, stdev=2172.23, samples=2 00:12:29.530 iops : min= 3644, max= 4412, avg=4028.00, stdev=543.06, samples=2 00:12:29.530 lat (msec) : 10=7.12%, 20=70.04%, 50=22.84% 00:12:29.530 cpu : usr=3.86%, sys=10.09%, ctx=684, majf=0, minf=10 00:12:29.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:29.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:29.530 issued rwts: total=3644,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:29.530 00:12:29.530 Run status group 0 (all jobs): 00:12:29.530 READ: bw=64.4MiB/s (67.6MB/s), 14.1MiB/s-18.4MiB/s (14.7MB/s-19.3MB/s), io=65.2MiB (68.4MB), run=1005-1012msec 00:12:29.530 WRITE: bw=70.5MiB/s (73.9MB/s), 15.8MiB/s-19.9MiB/s (16.6MB/s-20.9MB/s), io=71.4MiB (74.8MB), run=1005-1012msec 00:12:29.530 00:12:29.530 Disk stats (read/write): 00:12:29.530 nvme0n1: ios=4146/4209, merge=0/0, ticks=53305/50384, in_queue=103689, util=87.98% 00:12:29.530 nvme0n2: ios=3700/4096, merge=0/0, ticks=47816/48335, in_queue=96151, util=88.59% 00:12:29.530 nvme0n3: ios=3584/3863, merge=0/0, ticks=52689/51364, in_queue=104053, util=89.20% 00:12:29.530 nvme0n4: ios=3370/3584, merge=0/0, ticks=49109/47305, in_queue=96414, util=89.86% 00:12:29.530 15:35:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:29.530 15:35:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77097 00:12:29.530 15:35:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:29.530 15:35:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:29.530 [global] 00:12:29.530 thread=1 00:12:29.530 invalidate=1 00:12:29.530 rw=read 00:12:29.530 time_based=1 00:12:29.530 runtime=10 00:12:29.530 ioengine=libaio 00:12:29.530 direct=1 00:12:29.530 bs=4096 00:12:29.530 iodepth=1 00:12:29.530 norandommap=1 00:12:29.530 numjobs=1 00:12:29.530 00:12:29.530 [job0] 00:12:29.530 filename=/dev/nvme0n1 00:12:29.530 [job1] 00:12:29.530 filename=/dev/nvme0n2 00:12:29.530 [job2] 00:12:29.530 filename=/dev/nvme0n3 
00:12:29.530 [job3] 00:12:29.530 filename=/dev/nvme0n4 00:12:29.530 Could not set queue depth (nvme0n1) 00:12:29.530 Could not set queue depth (nvme0n2) 00:12:29.530 Could not set queue depth (nvme0n3) 00:12:29.530 Could not set queue depth (nvme0n4) 00:12:29.530 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.530 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.530 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.530 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.530 fio-3.35 00:12:29.530 Starting 4 threads 00:12:32.812 15:35:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:32.812 fio: pid=77146, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:32.812 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=64401408, buflen=4096 00:12:32.812 15:35:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:32.812 fio: pid=77145, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:32.812 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=49512448, buflen=4096 00:12:32.812 15:35:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:32.812 15:35:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:33.070 fio: pid=77143, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:33.070 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=5914624, buflen=4096 00:12:33.070 15:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:33.070 15:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:33.329 fio: pid=77144, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:33.329 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=61857792, buflen=4096 00:12:33.329 15:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:33.329 15:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:33.329 00:12:33.329 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77143: Mon Jul 15 15:35:28 2024 00:12:33.329 read: IOPS=5176, BW=20.2MiB/s (21.2MB/s)(69.6MiB/3444msec) 00:12:33.329 slat (usec): min=10, max=16440, avg=17.84, stdev=172.44 00:12:33.329 clat (usec): min=131, max=3270, avg=174.00, stdev=48.92 00:12:33.329 lat (usec): min=144, max=16641, avg=191.83, stdev=180.00 00:12:33.329 clat percentiles (usec): 00:12:33.329 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 159], 00:12:33.330 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:12:33.330 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 206], 00:12:33.330 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 469], 99.95th=[ 545], 00:12:33.330 | 99.99th=[ 3261] 00:12:33.330 bw ( KiB/s): 
min=20344, max=21576, per=32.37%, avg=21254.67, stdev=479.34, samples=6 00:12:33.330 iops : min= 5086, max= 5394, avg=5313.67, stdev=119.84, samples=6 00:12:33.330 lat (usec) : 250=96.25%, 500=3.67%, 750=0.04% 00:12:33.330 lat (msec) : 2=0.01%, 4=0.02% 00:12:33.330 cpu : usr=1.45%, sys=6.42%, ctx=17838, majf=0, minf=1 00:12:33.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.330 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.330 issued rwts: total=17829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.330 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77144: Mon Jul 15 15:35:28 2024 00:12:33.330 read: IOPS=4081, BW=15.9MiB/s (16.7MB/s)(59.0MiB/3700msec) 00:12:33.330 slat (usec): min=7, max=10349, avg=18.24, stdev=173.07 00:12:33.330 clat (usec): min=61, max=7157, avg=225.28, stdev=118.14 00:12:33.330 lat (usec): min=138, max=10561, avg=243.52, stdev=209.67 00:12:33.330 clat percentiles (usec): 00:12:33.330 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 159], 00:12:33.330 | 30.00th=[ 167], 40.00th=[ 215], 50.00th=[ 245], 60.00th=[ 253], 00:12:33.330 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:12:33.330 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 515], 99.95th=[ 1188], 00:12:33.330 | 99.99th=[ 7177] 00:12:33.330 bw ( KiB/s): min=13560, max=21239, per=24.40%, avg=16022.71, stdev=3132.54, samples=7 00:12:33.330 iops : min= 3390, max= 5309, avg=4005.57, stdev=782.93, samples=7 00:12:33.330 lat (usec) : 100=0.01%, 250=56.29%, 500=43.59%, 750=0.03%, 1000=0.02% 00:12:33.330 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02% 00:12:33.330 cpu : usr=1.16%, sys=5.57%, ctx=15126, majf=0, minf=1 00:12:33.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.330 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.330 issued rwts: total=15103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.330 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77145: Mon Jul 15 15:35:28 2024 00:12:33.330 read: IOPS=3769, BW=14.7MiB/s (15.4MB/s)(47.2MiB/3207msec) 00:12:33.330 slat (usec): min=8, max=7805, avg=15.08, stdev=97.68 00:12:33.330 clat (usec): min=150, max=1357, avg=248.73, stdev=45.21 00:12:33.330 lat (usec): min=165, max=8053, avg=263.82, stdev=107.41 00:12:33.330 clat percentiles (usec): 00:12:33.330 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 219], 00:12:33.330 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:12:33.330 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:12:33.330 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 383], 99.95th=[ 433], 00:12:33.330 | 99.99th=[ 914] 00:12:33.330 bw ( KiB/s): min=13816, max=20208, per=23.23%, avg=15256.00, stdev=2442.48, samples=6 00:12:33.330 iops : min= 3454, max= 5052, avg=3814.00, stdev=610.62, samples=6 00:12:33.330 lat (usec) : 250=39.26%, 500=60.69%, 750=0.02%, 1000=0.01% 00:12:33.330 lat (msec) : 2=0.01% 00:12:33.330 cpu : usr=0.84%, sys=5.12%, ctx=12108, majf=0, minf=1 00:12:33.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:12:33.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.330 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.330 issued rwts: total=12089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.330 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77146: Mon Jul 15 15:35:28 2024 00:12:33.330 read: IOPS=5337, BW=20.8MiB/s (21.9MB/s)(61.4MiB/2946msec) 00:12:33.330 slat (nsec): min=12482, max=92263, avg=15492.65, stdev=3504.54 00:12:33.330 clat (usec): min=140, max=1990, avg=170.60, stdev=28.81 00:12:33.330 lat (usec): min=154, max=2004, avg=186.09, stdev=29.11 00:12:33.330 clat percentiles (usec): 00:12:33.330 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:12:33.330 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:12:33.330 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 194], 00:12:33.330 | 99.00th=[ 208], 99.50th=[ 269], 99.90th=[ 478], 99.95th=[ 635], 00:12:33.330 | 99.99th=[ 1729] 00:12:33.330 bw ( KiB/s): min=21008, max=21560, per=32.54%, avg=21368.00, stdev=223.07, samples=5 00:12:33.330 iops : min= 5252, max= 5390, avg=5342.00, stdev=55.77, samples=5 00:12:33.330 lat (usec) : 250=99.43%, 500=0.47%, 750=0.07%, 1000=0.01% 00:12:33.330 lat (msec) : 2=0.01% 00:12:33.330 cpu : usr=1.15%, sys=6.99%, ctx=15724, majf=0, minf=1 00:12:33.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.330 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.330 issued rwts: total=15724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.330 00:12:33.330 Run status group 0 (all jobs): 00:12:33.330 READ: bw=64.1MiB/s (67.2MB/s), 14.7MiB/s-20.8MiB/s (15.4MB/s-21.9MB/s), io=237MiB (249MB), run=2946-3700msec 00:12:33.330 00:12:33.330 Disk stats (read/write): 00:12:33.330 nvme0n1: ios=17409/0, merge=0/0, ticks=3149/0, in_queue=3149, util=95.16% 00:12:33.330 nvme0n2: ios=14620/0, merge=0/0, ticks=3318/0, in_queue=3318, util=95.29% 00:12:33.330 nvme0n3: ios=11803/0, merge=0/0, ticks=2832/0, in_queue=2832, util=96.40% 00:12:33.330 nvme0n4: ios=15304/0, merge=0/0, ticks=2736/0, in_queue=2736, util=96.76% 00:12:33.589 15:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:33.589 15:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:33.847 15:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:33.847 15:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:34.106 15:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:34.106 15:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:34.364 15:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:34.364 15:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77097 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.622 nvmf hotplug test: fio failed as expected 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:34.622 15:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.882 15:35:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.141 rmmod nvme_tcp 00:12:35.141 rmmod nvme_fabrics 00:12:35.141 rmmod nvme_keyring 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 76607 ']' 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 76607 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 76607 ']' 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 76607 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@953 -- # uname 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76607 00:12:35.141 killing process with pid 76607 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76607' 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 76607 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 76607 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.141 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:35.142 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.142 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.142 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.142 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.142 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.142 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.402 15:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:35.402 ************************************ 00:12:35.402 END TEST nvmf_fio_target 00:12:35.402 ************************************ 00:12:35.402 00:12:35.402 real 0m19.483s 00:12:35.402 user 1m14.711s 00:12:35.402 sys 0m9.023s 00:12:35.402 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.402 15:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 15:35:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:35.402 15:35:30 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:35.402 15:35:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:35.402 15:35:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.402 15:35:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 ************************************ 00:12:35.402 START TEST nvmf_bdevio 00:12:35.402 ************************************ 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:35.402 * Looking for test storage... 
00:12:35.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.402 15:35:30 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:35.402 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:35.403 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:35.403 Cannot find device "nvmf_tgt_br" 00:12:35.403 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:12:35.403 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:35.403 Cannot find device "nvmf_tgt_br2" 00:12:35.403 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:12:35.403 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:35.403 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:35.403 Cannot find device "nvmf_tgt_br" 00:12:35.403 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:12:35.403 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:35.403 Cannot find device "nvmf_tgt_br2" 00:12:35.403 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:12:35.403 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:35.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:35.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:35.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:12:35.662 00:12:35.662 --- 10.0.0.2 ping statistics --- 00:12:35.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.662 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:35.662 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:35.662 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:12:35.662 00:12:35.662 --- 10.0.0.3 ping statistics --- 00:12:35.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.662 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:35.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:35.662 00:12:35.662 --- 10.0.0.1 ping statistics --- 00:12:35.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.662 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=77468 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 77468 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 77468 ']' 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.662 15:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.921 [2024-07-15 15:35:30.839963] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:12:35.921 [2024-07-15 15:35:30.840076] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.921 [2024-07-15 15:35:30.980793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.921 [2024-07-15 15:35:31.032891] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.921 [2024-07-15 15:35:31.032955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
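Note: the nvmf_veth_init trace above wires the initiator and target into a small virtual topology before nvmf_tgt is started. A condensed sketch of the same bring-up is shown below; the command forms are copied from the trace, while running them standalone as root from the SPDK repo root is an assumption, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) plus the individual "ip link set ... up" steps are omitted for brevity.

    # Target-side interfaces live in their own network namespace; the initiator side stays on the host.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target address
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # same reachability check the log performs
    # Core mask 0x78 = 0b1111000 selects cores 3-6, which matches the four
    # "Reactor started on core ..." notices that follow in the log.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &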
00:12:35.921 [2024-07-15 15:35:31.032965] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.921 [2024-07-15 15:35:31.032973] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.921 [2024-07-15 15:35:31.032979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.921 [2024-07-15 15:35:31.033147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:35.921 [2024-07-15 15:35:31.033188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:35.921 [2024-07-15 15:35:31.033316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:35.921 [2024-07-15 15:35:31.033679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.179 [2024-07-15 15:35:31.175585] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.179 Malloc0 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
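Note: once nvmf_tgt is up, the rpc_cmd trace above configures the target entirely over the JSON-RPC socket. The same sequence expressed as direct scripts/rpc.py calls is sketched below; the subcommands and arguments are copied verbatim from the trace, while invoking rpc.py directly (rather than through the rpc_cmd helper) from the SPDK repo root is an assumption.

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # create the TCP transport (flags as traced)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as namespace 1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the final call the target emits the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice that appears next in the log, at which point bdevio can connect to the subsystem.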
00:12:36.179 [2024-07-15 15:35:31.235581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:36.179 { 00:12:36.179 "params": { 00:12:36.179 "name": "Nvme$subsystem", 00:12:36.179 "trtype": "$TEST_TRANSPORT", 00:12:36.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:36.179 "adrfam": "ipv4", 00:12:36.179 "trsvcid": "$NVMF_PORT", 00:12:36.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:36.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:36.179 "hdgst": ${hdgst:-false}, 00:12:36.179 "ddgst": ${ddgst:-false} 00:12:36.179 }, 00:12:36.179 "method": "bdev_nvme_attach_controller" 00:12:36.179 } 00:12:36.179 EOF 00:12:36.179 )") 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:36.179 15:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:36.179 "params": { 00:12:36.179 "name": "Nvme1", 00:12:36.179 "trtype": "tcp", 00:12:36.179 "traddr": "10.0.0.2", 00:12:36.179 "adrfam": "ipv4", 00:12:36.179 "trsvcid": "4420", 00:12:36.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:36.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:36.179 "hdgst": false, 00:12:36.179 "ddgst": false 00:12:36.179 }, 00:12:36.179 "method": "bdev_nvme_attach_controller" 00:12:36.179 }' 00:12:36.180 [2024-07-15 15:35:31.290592] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:12:36.180 [2024-07-15 15:35:31.290674] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77503 ] 00:12:36.438 [2024-07-15 15:35:31.427175] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:36.438 [2024-07-15 15:35:31.490966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.438 [2024-07-15 15:35:31.491627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.438 [2024-07-15 15:35:31.491641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.696 I/O targets: 00:12:36.696 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:36.696 00:12:36.696 00:12:36.696 CUnit - A unit testing framework for C - Version 2.1-3 00:12:36.696 http://cunit.sourceforge.net/ 00:12:36.696 00:12:36.696 00:12:36.696 Suite: bdevio tests on: Nvme1n1 00:12:36.696 Test: blockdev write read block ...passed 00:12:36.696 Test: blockdev write zeroes read block ...passed 00:12:36.696 Test: blockdev write zeroes read no split ...passed 00:12:36.696 Test: blockdev write zeroes read split ...passed 00:12:36.696 Test: blockdev write zeroes read split partial ...passed 00:12:36.696 Test: blockdev reset ...[2024-07-15 15:35:31.747315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:36.696 [2024-07-15 15:35:31.747423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1990180 (9): Bad file descriptor 00:12:36.696 passed 00:12:36.696 Test: blockdev write read 8 blocks ...[2024-07-15 15:35:31.761858] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:36.696 passed 00:12:36.696 Test: blockdev write read size > 128k ...passed 00:12:36.696 Test: blockdev write read invalid size ...passed 00:12:36.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.696 Test: blockdev write read max offset ...passed 00:12:36.954 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.954 Test: blockdev writev readv 8 blocks ...passed 00:12:36.954 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.954 Test: blockdev writev readv block ...passed 00:12:36.955 Test: blockdev writev readv size > 128k ...passed 00:12:36.955 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.955 Test: blockdev comparev and writev ...[2024-07-15 15:35:31.935166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.955 [2024-07-15 15:35:31.935241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:36.955 [2024-07-15 15:35:31.935261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.955 [2024-07-15 15:35:31.935273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:36.955 [2024-07-15 15:35:31.935583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.955 [2024-07-15 15:35:31.935602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:36.955 [2024-07-15 15:35:31.935619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.955 [2024-07-15 15:35:31.935630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:36.955 [2024-07-15 15:35:31.935916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.955 [2024-07-15 15:35:31.935947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:36.955 [2024-07-15 15:35:31.935963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.955 [2024-07-15 15:35:31.935973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:36.955 [2024-07-15 15:35:31.936243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.955 [2024-07-15 15:35:31.936259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:36.955 [2024-07-15 15:35:31.936275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.955 [2024-07-15 15:35:31.936284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:12:36.955 passed 00:12:36.955 Test: blockdev nvme passthru rw ...passed 00:12:36.955 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:35:32.017838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.955 [2024-07-15 15:35:32.017880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:36.955 [2024-07-15 15:35:32.018017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.955 [2024-07-15 15:35:32.018034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:36.955 [2024-07-15 15:35:32.018154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.955 [2024-07-15 15:35:32.018170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:36.955 passed 00:12:36.955 Test: blockdev nvme admin passthru ...[2024-07-15 15:35:32.018275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.955 [2024-07-15 15:35:32.018296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:36.955 passed 00:12:36.955 Test: blockdev copy ...passed 00:12:36.955 00:12:36.955 Run Summary: Type Total Ran Passed Failed Inactive 00:12:36.955 suites 1 1 n/a 0 0 00:12:36.955 tests 23 23 23 0 0 00:12:36.955 asserts 152 152 152 0 n/a 00:12:36.955 00:12:36.955 Elapsed time = 0.895 seconds 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:37.213 rmmod nvme_tcp 00:12:37.213 rmmod nvme_fabrics 00:12:37.213 rmmod nvme_keyring 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 77468 ']' 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 77468 00:12:37.213 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 77468 ']' 00:12:37.214 15:35:32 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@952 -- # kill -0 77468 00:12:37.214 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:12:37.214 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:37.214 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77468 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:12:37.472 killing process with pid 77468 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77468' 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 77468 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 77468 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:37.472 00:12:37.472 real 0m2.209s 00:12:37.472 user 0m7.595s 00:12:37.472 sys 0m0.622s 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.472 15:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:37.472 ************************************ 00:12:37.472 END TEST nvmf_bdevio 00:12:37.472 ************************************ 00:12:37.743 15:35:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:37.743 15:35:32 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:37.743 15:35:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:37.743 15:35:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.743 15:35:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:37.743 ************************************ 00:12:37.743 START TEST nvmf_auth_target 00:12:37.743 ************************************ 00:12:37.743 15:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:37.743 * Looking for test storage... 
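Before the next suite starts, the bdevio teardown traced above unloads the kernel initiator modules and stops the target with the usual kill-by-pid pattern. Roughly, and only as a sketch of what those helpers do (not their exact bodies from common/autotest_common.sh):

    cleanup_sketch() {
        local pid=$1
        modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics / nvme_keyring, as in the rmmod lines above
        modprobe -v -r nvme-fabrics
        if kill -0 "$pid" 2>/dev/null &&
            [[ $(ps --no-headers -o comm= "$pid") == reactor_* ]]; then
            kill "$pid"
            wait "$pid" 2>/dev/null || true
        fi
    }

The kill -0 / ps -o comm= check mirrors the trace above: the pid is only signalled if the process still exists and its command name still looks like an SPDK reactor rather than some unrelated process that reused the pid.
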
00:12:37.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:37.743 15:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:37.743 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:12:37.743 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.743 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.743 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.743 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.743 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:37.744 Cannot find device "nvmf_tgt_br" 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:37.744 Cannot find device "nvmf_tgt_br2" 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:37.744 Cannot find device "nvmf_tgt_br" 00:12:37.744 
15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:37.744 Cannot find device "nvmf_tgt_br2" 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:37.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:37.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:37.744 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:38.004 15:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:38.004 15:35:33 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:38.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:38.004 00:12:38.004 --- 10.0.0.2 ping statistics --- 00:12:38.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.004 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:38.004 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:38.004 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:12:38.004 00:12:38.004 --- 10.0.0.3 ping statistics --- 00:12:38.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.004 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:38.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:38.004 00:12:38.004 --- 10.0.0.1 ping statistics --- 00:12:38.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.004 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77684 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77684 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77684 ']' 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.004 15:35:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.004 15:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77728 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=81b316946608e8650ed1ff715e752789d43d67e093f76d74 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ttb 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 81b316946608e8650ed1ff715e752789d43d67e093f76d74 0 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 81b316946608e8650ed1ff715e752789d43d67e093f76d74 0 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=81b316946608e8650ed1ff715e752789d43d67e093f76d74 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ttb 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ttb 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ttb 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7cee047cea86a8c9b0c6e0af27162d0f8dee668276b83a2d1f3acb79b3b9f55d 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.NtD 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7cee047cea86a8c9b0c6e0af27162d0f8dee668276b83a2d1f3acb79b3b9f55d 3 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7cee047cea86a8c9b0c6e0af27162d0f8dee668276b83a2d1f3acb79b3b9f55d 3 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7cee047cea86a8c9b0c6e0af27162d0f8dee668276b83a2d1f3acb79b3b9f55d 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.NtD 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.NtD 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.NtD 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0f084e0c52ca3f35f50fc5e9716faf81 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dJB 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0f084e0c52ca3f35f50fc5e9716faf81 1 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0f084e0c52ca3f35f50fc5e9716faf81 1 
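Stepping back to the two application launches a few lines up (nvmf_tgt started inside nvmf_tgt_ns_spdk on /var/tmp/spdk.sock, and spdk_tgt with -r /var/tmp/host.sock for the host side), the "Waiting for process to start up and listen on UNIX domain socket ..." messages come from a poll loop. A minimal sketch of that pattern, assuming the repository-relative build/ and scripts/ paths and the stock rpc_get_methods RPC (the real waitforlisten helper differs in detail):

    start_and_wait() {
        local sock=$1; shift
        "$@" > "/tmp/$(basename "$sock").log" 2>&1 &   # keep the app's own output out of stdout
        local pid=$!
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..." >&2
        while kill -0 "$pid" 2>/dev/null; do
            # ready once the RPC socket answers a trivial request
            if scripts/rpc.py -s "$sock" rpc_get_methods > /dev/null 2>&1; then
                echo "$pid"
                return 0
            fi
            sleep 0.5
        done
        return 1                                       # app exited before it started listening
    }

    tgt_pid=$(start_and_wait /var/tmp/spdk.sock \
        ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth)
    host_pid=$(start_and_wait /var/tmp/host.sock build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth)
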
00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0f084e0c52ca3f35f50fc5e9716faf81 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dJB 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dJB 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.dJB 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:39.385 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=70a25d91ac66c3505e88ef5ab030d99c108960d17eb52edb 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.PRq 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 70a25d91ac66c3505e88ef5ab030d99c108960d17eb52edb 2 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 70a25d91ac66c3505e88ef5ab030d99c108960d17eb52edb 2 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=70a25d91ac66c3505e88ef5ab030d99c108960d17eb52edb 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.PRq 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.PRq 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.PRq 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:39.386 
15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=738730814ee7ebac250f300e7d30c0053961025e08996b84 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7My 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 738730814ee7ebac250f300e7d30c0053961025e08996b84 2 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 738730814ee7ebac250f300e7d30c0053961025e08996b84 2 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=738730814ee7ebac250f300e7d30c0053961025e08996b84 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:39.386 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7My 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7My 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.7My 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=38802a0df1fb50bcf4289c7e1373b674 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.t3F 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 38802a0df1fb50bcf4289c7e1373b674 1 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 38802a0df1fb50bcf4289c7e1373b674 1 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=38802a0df1fb50bcf4289c7e1373b674 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.t3F 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.t3F 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.t3F 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=26aa6b2be617268773adc46cc034395a35f541ec81408755f0fcdef82fff45aa 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DRX 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 26aa6b2be617268773adc46cc034395a35f541ec81408755f0fcdef82fff45aa 3 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 26aa6b2be617268773adc46cc034395a35f541ec81408755f0fcdef82fff45aa 3 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=26aa6b2be617268773adc46cc034395a35f541ec81408755f0fcdef82fff45aa 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DRX 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DRX 00:12:39.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.DRX 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77684 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77684 ']' 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.645 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
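The key files prepared above (/tmp/spdk.key-null.ttb, spdk.key-sha256.dJB, spdk.key-sha384.7My, spdk.key-sha512.DRX and their ctrlr-key counterparts) all come out of the same recipe the trace shows: random hex characters from /dev/urandom, wrapped into a DHHC-1:<digest>:<base64>: secret and restricted to mode 0600. A rough re-creation follows; it is a sketch, not the exact nvmf/common.sh helper, and it assumes the base64 payload is the ASCII hex string followed by its little-endian CRC-32:

    gen_dhchap_key_sketch() {
        local digest=$1 len=$2     # digest id: 0=null 1=sha256 2=sha384 3=sha512
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        file=$(mktemp -t spdk.key.XXXXXX)
        # assumption: payload = ASCII hex string + CRC-32 of that string, base64-encoded
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); print("DHHC-1:%02x:%s:" % (d, base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    key0=$(gen_dhchap_key_sketch 0 48)   # like keys[0] above: 48 hex chars, null digest

For example, keys[0] above corresponds to 48 hex characters and digest id 0 (null), which is why the secret later passed to nvme connect starts with DHHC-1:00:, while the sha512 ctrlr key starts with DHHC-1:03:.
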
00:12:39.906 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.906 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:39.906 15:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77728 /var/tmp/host.sock 00:12:39.906 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77728 ']' 00:12:39.906 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:39.906 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.906 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:39.906 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.906 15:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ttb 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ttb 00:12:40.164 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ttb 00:12:40.423 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.NtD ]] 00:12:40.423 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NtD 00:12:40.423 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.423 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.423 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.423 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NtD 00:12:40.423 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NtD 00:12:40.682 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:40.682 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dJB 00:12:40.682 15:35:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.682 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.682 15:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.682 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dJB 00:12:40.682 15:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dJB 00:12:40.941 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.PRq ]] 00:12:40.941 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PRq 00:12:40.941 15:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.941 15:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.941 15:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.941 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PRq 00:12:40.941 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PRq 00:12:41.199 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:41.199 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.7My 00:12:41.199 15:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.199 15:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.199 15:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.199 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.7My 00:12:41.199 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.7My 00:12:41.458 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.t3F ]] 00:12:41.458 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t3F 00:12:41.458 15:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.458 15:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.458 15:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.458 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t3F 00:12:41.458 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t3F 00:12:41.717 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:41.717 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.DRX 00:12:41.717 15:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.717 15:35:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:41.717 15:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.717 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.DRX 00:12:41.717 15:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.DRX 00:12:41.975 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:12:41.975 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:41.975 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:41.975 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.975 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:41.975 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:42.233 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:12:42.233 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.233 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:42.233 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:42.233 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:42.233 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.233 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.233 15:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.233 15:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.233 15:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.233 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.233 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.492 00:12:42.492 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.492 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.492 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.750 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.750 15:35:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.750 15:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.750 15:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.750 15:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.750 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.750 { 00:12:42.750 "auth": { 00:12:42.750 "dhgroup": "null", 00:12:42.750 "digest": "sha256", 00:12:42.750 "state": "completed" 00:12:42.750 }, 00:12:42.750 "cntlid": 1, 00:12:42.750 "listen_address": { 00:12:42.750 "adrfam": "IPv4", 00:12:42.750 "traddr": "10.0.0.2", 00:12:42.750 "trsvcid": "4420", 00:12:42.750 "trtype": "TCP" 00:12:42.750 }, 00:12:42.750 "peer_address": { 00:12:42.750 "adrfam": "IPv4", 00:12:42.750 "traddr": "10.0.0.1", 00:12:42.750 "trsvcid": "50814", 00:12:42.750 "trtype": "TCP" 00:12:42.750 }, 00:12:42.750 "qid": 0, 00:12:42.750 "state": "enabled", 00:12:42.750 "thread": "nvmf_tgt_poll_group_000" 00:12:42.750 } 00:12:42.750 ]' 00:12:42.750 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.009 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:43.009 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.009 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:43.009 15:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.009 15:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.009 15:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.009 15:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.268 15:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:12:47.565 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.565 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:47.565 15:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.565 15:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.565 15:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.565 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:47.565 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:47.565 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:47.824 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:12:47.824 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.824 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:47.824 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:47.824 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:47.824 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.824 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.824 15:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.824 15:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.824 15:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.824 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.824 15:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.081 00:12:48.081 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.081 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.081 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.338 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.338 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.338 15:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.338 15:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.338 15:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.338 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.338 { 00:12:48.338 "auth": { 00:12:48.338 "dhgroup": "null", 00:12:48.338 "digest": "sha256", 00:12:48.338 "state": "completed" 00:12:48.338 }, 00:12:48.338 "cntlid": 3, 00:12:48.338 "listen_address": { 00:12:48.338 "adrfam": "IPv4", 00:12:48.338 "traddr": "10.0.0.2", 00:12:48.338 "trsvcid": "4420", 00:12:48.338 "trtype": "TCP" 00:12:48.338 }, 00:12:48.338 "peer_address": { 00:12:48.338 "adrfam": "IPv4", 00:12:48.338 "traddr": "10.0.0.1", 00:12:48.338 "trsvcid": "54512", 00:12:48.338 "trtype": "TCP" 00:12:48.338 }, 00:12:48.338 "qid": 0, 00:12:48.338 "state": "enabled", 00:12:48.338 "thread": "nvmf_tgt_poll_group_000" 
00:12:48.338 } 00:12:48.338 ]' 00:12:48.338 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.338 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:48.338 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.596 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:48.596 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.596 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.596 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.596 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.854 15:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.788 15:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.046 00:12:50.046 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.046 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.046 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.304 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.304 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.304 15:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.304 15:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.304 15:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.304 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.304 { 00:12:50.304 "auth": { 00:12:50.304 "dhgroup": "null", 00:12:50.304 "digest": "sha256", 00:12:50.304 "state": "completed" 00:12:50.304 }, 00:12:50.304 "cntlid": 5, 00:12:50.304 "listen_address": { 00:12:50.304 "adrfam": "IPv4", 00:12:50.304 "traddr": "10.0.0.2", 00:12:50.304 "trsvcid": "4420", 00:12:50.304 "trtype": "TCP" 00:12:50.304 }, 00:12:50.304 "peer_address": { 00:12:50.304 "adrfam": "IPv4", 00:12:50.304 "traddr": "10.0.0.1", 00:12:50.304 "trsvcid": "54548", 00:12:50.304 "trtype": "TCP" 00:12:50.304 }, 00:12:50.304 "qid": 0, 00:12:50.304 "state": "enabled", 00:12:50.304 "thread": "nvmf_tgt_poll_group_000" 00:12:50.304 } 00:12:50.304 ]' 00:12:50.304 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.563 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:50.563 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.563 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:50.563 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.563 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.563 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.563 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.821 15:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid 
a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:51.758 15:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:52.016 00:12:52.016 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.016 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.016 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.274 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:12:52.274 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.274 15:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.274 15:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.274 15:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.274 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:52.274 { 00:12:52.274 "auth": { 00:12:52.274 "dhgroup": "null", 00:12:52.274 "digest": "sha256", 00:12:52.274 "state": "completed" 00:12:52.274 }, 00:12:52.274 "cntlid": 7, 00:12:52.274 "listen_address": { 00:12:52.274 "adrfam": "IPv4", 00:12:52.274 "traddr": "10.0.0.2", 00:12:52.274 "trsvcid": "4420", 00:12:52.274 "trtype": "TCP" 00:12:52.274 }, 00:12:52.274 "peer_address": { 00:12:52.274 "adrfam": "IPv4", 00:12:52.274 "traddr": "10.0.0.1", 00:12:52.274 "trsvcid": "54584", 00:12:52.274 "trtype": "TCP" 00:12:52.274 }, 00:12:52.274 "qid": 0, 00:12:52.274 "state": "enabled", 00:12:52.274 "thread": "nvmf_tgt_poll_group_000" 00:12:52.274 } 00:12:52.274 ]' 00:12:52.274 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.532 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:52.532 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.532 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:52.532 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.532 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.532 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.532 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.790 15:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:12:53.725 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.725 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:53.725 15:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.725 15:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.725 15:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.725 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:53.725 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:53.725 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:53.726 15:35:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:53.726 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:12:53.726 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.984 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:53.984 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:53.984 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:53.984 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.985 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.985 15:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.985 15:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.985 15:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.985 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.985 15:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.244 00:12:54.244 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:54.244 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.244 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:54.503 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.503 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.503 15:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.503 15:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.503 15:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.503 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:54.503 { 00:12:54.503 "auth": { 00:12:54.503 "dhgroup": "ffdhe2048", 00:12:54.503 "digest": "sha256", 00:12:54.503 "state": "completed" 00:12:54.503 }, 00:12:54.503 "cntlid": 9, 00:12:54.503 "listen_address": { 00:12:54.503 "adrfam": "IPv4", 00:12:54.503 "traddr": "10.0.0.2", 00:12:54.503 "trsvcid": "4420", 00:12:54.503 "trtype": "TCP" 00:12:54.503 }, 00:12:54.503 "peer_address": { 00:12:54.503 "adrfam": "IPv4", 00:12:54.503 "traddr": "10.0.0.1", 00:12:54.503 "trsvcid": "54616", 00:12:54.503 "trtype": "TCP" 00:12:54.503 }, 00:12:54.503 "qid": 0, 
00:12:54.503 "state": "enabled", 00:12:54.503 "thread": "nvmf_tgt_poll_group_000" 00:12:54.503 } 00:12:54.503 ]' 00:12:54.503 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:54.503 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:54.503 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:54.503 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:54.503 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:54.762 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.762 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.762 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.021 15:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:12:55.589 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.589 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:55.589 15:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.589 15:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.589 15:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.589 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.589 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:55.589 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:55.847 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:12:55.847 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.847 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:55.847 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:55.847 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:55.847 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.847 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.847 15:35:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.847 15:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.847 15:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.847 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.847 15:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.106 00:12:56.106 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.106 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.106 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.365 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.365 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.365 15:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.365 15:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.365 15:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.365 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.365 { 00:12:56.365 "auth": { 00:12:56.365 "dhgroup": "ffdhe2048", 00:12:56.365 "digest": "sha256", 00:12:56.365 "state": "completed" 00:12:56.365 }, 00:12:56.365 "cntlid": 11, 00:12:56.365 "listen_address": { 00:12:56.365 "adrfam": "IPv4", 00:12:56.365 "traddr": "10.0.0.2", 00:12:56.365 "trsvcid": "4420", 00:12:56.365 "trtype": "TCP" 00:12:56.365 }, 00:12:56.365 "peer_address": { 00:12:56.365 "adrfam": "IPv4", 00:12:56.365 "traddr": "10.0.0.1", 00:12:56.365 "trsvcid": "38706", 00:12:56.365 "trtype": "TCP" 00:12:56.365 }, 00:12:56.365 "qid": 0, 00:12:56.365 "state": "enabled", 00:12:56.365 "thread": "nvmf_tgt_poll_group_000" 00:12:56.365 } 00:12:56.365 ]' 00:12:56.365 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.365 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:56.365 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.365 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:56.365 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.631 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.631 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.631 15:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.891 15:35:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:12:57.460 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.460 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:57.460 15:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.460 15:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.460 15:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.460 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:57.460 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:57.460 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:57.720 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:12:57.720 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.720 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:57.720 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:57.720 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:57.720 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.720 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.720 15:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.720 15:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.979 15:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.979 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.979 15:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.238 00:12:58.238 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.238 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:12:58.239 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.498 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.498 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.498 15:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.498 15:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.498 15:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.498 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.498 { 00:12:58.499 "auth": { 00:12:58.499 "dhgroup": "ffdhe2048", 00:12:58.499 "digest": "sha256", 00:12:58.499 "state": "completed" 00:12:58.499 }, 00:12:58.499 "cntlid": 13, 00:12:58.499 "listen_address": { 00:12:58.499 "adrfam": "IPv4", 00:12:58.499 "traddr": "10.0.0.2", 00:12:58.499 "trsvcid": "4420", 00:12:58.499 "trtype": "TCP" 00:12:58.499 }, 00:12:58.499 "peer_address": { 00:12:58.499 "adrfam": "IPv4", 00:12:58.499 "traddr": "10.0.0.1", 00:12:58.499 "trsvcid": "38742", 00:12:58.499 "trtype": "TCP" 00:12:58.499 }, 00:12:58.499 "qid": 0, 00:12:58.499 "state": "enabled", 00:12:58.499 "thread": "nvmf_tgt_poll_group_000" 00:12:58.499 } 00:12:58.499 ]' 00:12:58.499 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.499 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:58.499 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.499 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:58.499 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.758 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.758 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.758 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.017 15:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:12:59.585 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.585 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:12:59.585 15:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.585 15:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.585 15:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.585 15:35:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.585 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:59.585 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:59.844 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:12:59.844 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.844 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:59.844 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:59.844 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:59.844 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.844 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:12:59.844 15:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.844 15:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.844 15:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.844 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.844 15:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.103 00:13:00.103 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.103 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.103 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:00.363 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.363 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.363 15:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.363 15:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.363 15:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.363 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.363 { 00:13:00.363 "auth": { 00:13:00.363 "dhgroup": "ffdhe2048", 00:13:00.363 "digest": "sha256", 00:13:00.363 "state": "completed" 00:13:00.363 }, 00:13:00.363 "cntlid": 15, 00:13:00.363 "listen_address": { 00:13:00.363 "adrfam": "IPv4", 00:13:00.363 "traddr": "10.0.0.2", 00:13:00.363 "trsvcid": "4420", 00:13:00.363 "trtype": "TCP" 00:13:00.363 }, 00:13:00.363 "peer_address": { 00:13:00.363 "adrfam": 
"IPv4", 00:13:00.363 "traddr": "10.0.0.1", 00:13:00.363 "trsvcid": "38772", 00:13:00.363 "trtype": "TCP" 00:13:00.363 }, 00:13:00.363 "qid": 0, 00:13:00.363 "state": "enabled", 00:13:00.363 "thread": "nvmf_tgt_poll_group_000" 00:13:00.363 } 00:13:00.363 ]' 00:13:00.363 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:00.363 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:00.363 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:00.622 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:00.622 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:00.622 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.622 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.622 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.881 15:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:13:01.449 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.449 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:01.450 15:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.450 15:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.450 15:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.450 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:01.450 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:01.450 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:01.450 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:01.709 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:13:01.709 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:01.709 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:01.709 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:01.709 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:01.709 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.709 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.709 15:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.709 15:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.709 15:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.709 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.709 15:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.276 00:13:02.276 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:02.276 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.276 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:02.534 { 00:13:02.534 "auth": { 00:13:02.534 "dhgroup": "ffdhe3072", 00:13:02.534 "digest": "sha256", 00:13:02.534 "state": "completed" 00:13:02.534 }, 00:13:02.534 "cntlid": 17, 00:13:02.534 "listen_address": { 00:13:02.534 "adrfam": "IPv4", 00:13:02.534 "traddr": "10.0.0.2", 00:13:02.534 "trsvcid": "4420", 00:13:02.534 "trtype": "TCP" 00:13:02.534 }, 00:13:02.534 "peer_address": { 00:13:02.534 "adrfam": "IPv4", 00:13:02.534 "traddr": "10.0.0.1", 00:13:02.534 "trsvcid": "38806", 00:13:02.534 "trtype": "TCP" 00:13:02.534 }, 00:13:02.534 "qid": 0, 00:13:02.534 "state": "enabled", 00:13:02.534 "thread": "nvmf_tgt_poll_group_000" 00:13:02.534 } 00:13:02.534 ]' 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.534 15:35:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.792 15:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.789 15:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:13:04.355 00:13:04.355 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:04.355 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:04.355 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:04.632 { 00:13:04.632 "auth": { 00:13:04.632 "dhgroup": "ffdhe3072", 00:13:04.632 "digest": "sha256", 00:13:04.632 "state": "completed" 00:13:04.632 }, 00:13:04.632 "cntlid": 19, 00:13:04.632 "listen_address": { 00:13:04.632 "adrfam": "IPv4", 00:13:04.632 "traddr": "10.0.0.2", 00:13:04.632 "trsvcid": "4420", 00:13:04.632 "trtype": "TCP" 00:13:04.632 }, 00:13:04.632 "peer_address": { 00:13:04.632 "adrfam": "IPv4", 00:13:04.632 "traddr": "10.0.0.1", 00:13:04.632 "trsvcid": "38840", 00:13:04.632 "trtype": "TCP" 00:13:04.632 }, 00:13:04.632 "qid": 0, 00:13:04.632 "state": "enabled", 00:13:04.632 "thread": "nvmf_tgt_poll_group_000" 00:13:04.632 } 00:13:04.632 ]' 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.632 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.890 15:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:13:05.456 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.456 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:05.456 15:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.456 
15:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.456 15:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.456 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:05.456 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:05.456 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:05.714 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:13:05.714 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:05.714 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:05.714 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:05.714 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:05.714 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.714 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.714 15:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.714 15:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.714 15:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.714 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.714 15:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.282 00:13:06.282 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:06.282 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.282 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.540 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.540 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.540 15:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.540 15:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.540 15:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.540 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:06.540 { 00:13:06.540 "auth": { 00:13:06.540 "dhgroup": "ffdhe3072", 00:13:06.540 "digest": "sha256", 
00:13:06.540 "state": "completed" 00:13:06.540 }, 00:13:06.540 "cntlid": 21, 00:13:06.540 "listen_address": { 00:13:06.540 "adrfam": "IPv4", 00:13:06.540 "traddr": "10.0.0.2", 00:13:06.540 "trsvcid": "4420", 00:13:06.540 "trtype": "TCP" 00:13:06.540 }, 00:13:06.540 "peer_address": { 00:13:06.540 "adrfam": "IPv4", 00:13:06.541 "traddr": "10.0.0.1", 00:13:06.541 "trsvcid": "44758", 00:13:06.541 "trtype": "TCP" 00:13:06.541 }, 00:13:06.541 "qid": 0, 00:13:06.541 "state": "enabled", 00:13:06.541 "thread": "nvmf_tgt_poll_group_000" 00:13:06.541 } 00:13:06.541 ]' 00:13:06.541 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:06.541 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:06.541 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:06.541 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:06.541 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:06.541 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.541 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.541 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.799 15:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 
-- # key=key3 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.734 15:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.992 15:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.993 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:07.993 15:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:08.252 00:13:08.252 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.252 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.252 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.512 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.512 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.512 15:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.512 15:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.512 15:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.512 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:08.512 { 00:13:08.512 "auth": { 00:13:08.512 "dhgroup": "ffdhe3072", 00:13:08.512 "digest": "sha256", 00:13:08.512 "state": "completed" 00:13:08.512 }, 00:13:08.512 "cntlid": 23, 00:13:08.512 "listen_address": { 00:13:08.512 "adrfam": "IPv4", 00:13:08.512 "traddr": "10.0.0.2", 00:13:08.512 "trsvcid": "4420", 00:13:08.512 "trtype": "TCP" 00:13:08.512 }, 00:13:08.512 "peer_address": { 00:13:08.512 "adrfam": "IPv4", 00:13:08.512 "traddr": "10.0.0.1", 00:13:08.512 "trsvcid": "44778", 00:13:08.512 "trtype": "TCP" 00:13:08.512 }, 00:13:08.512 "qid": 0, 00:13:08.512 "state": "enabled", 00:13:08.512 "thread": "nvmf_tgt_poll_group_000" 00:13:08.512 } 00:13:08.512 ]' 00:13:08.512 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.512 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:08.512 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.512 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:08.512 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.772 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.772 
15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.772 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.031 15:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:13:09.598 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.598 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:09.598 15:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.598 15:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.598 15:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.598 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.598 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.598 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:09.598 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:09.857 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:13:09.857 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.857 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:09.857 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:09.857 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:09.857 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.857 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.857 15:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.857 15:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.857 15:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.857 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.857 15:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.116 00:13:10.374 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.374 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.374 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:10.633 { 00:13:10.633 "auth": { 00:13:10.633 "dhgroup": "ffdhe4096", 00:13:10.633 "digest": "sha256", 00:13:10.633 "state": "completed" 00:13:10.633 }, 00:13:10.633 "cntlid": 25, 00:13:10.633 "listen_address": { 00:13:10.633 "adrfam": "IPv4", 00:13:10.633 "traddr": "10.0.0.2", 00:13:10.633 "trsvcid": "4420", 00:13:10.633 "trtype": "TCP" 00:13:10.633 }, 00:13:10.633 "peer_address": { 00:13:10.633 "adrfam": "IPv4", 00:13:10.633 "traddr": "10.0.0.1", 00:13:10.633 "trsvcid": "44810", 00:13:10.633 "trtype": "TCP" 00:13:10.633 }, 00:13:10.633 "qid": 0, 00:13:10.633 "state": "enabled", 00:13:10.633 "thread": "nvmf_tgt_poll_group_000" 00:13:10.633 } 00:13:10.633 ]' 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.633 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.891 15:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:13:11.457 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.457 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:11.457 15:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.457 15:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.716 15:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.282 00:13:12.282 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.282 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.282 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.541 { 00:13:12.541 "auth": { 00:13:12.541 "dhgroup": "ffdhe4096", 00:13:12.541 "digest": "sha256", 00:13:12.541 "state": "completed" 00:13:12.541 }, 00:13:12.541 "cntlid": 27, 00:13:12.541 "listen_address": { 00:13:12.541 "adrfam": "IPv4", 00:13:12.541 "traddr": "10.0.0.2", 00:13:12.541 "trsvcid": "4420", 00:13:12.541 "trtype": "TCP" 00:13:12.541 }, 00:13:12.541 "peer_address": { 00:13:12.541 "adrfam": "IPv4", 00:13:12.541 "traddr": "10.0.0.1", 00:13:12.541 "trsvcid": "44842", 00:13:12.541 "trtype": "TCP" 00:13:12.541 }, 00:13:12.541 "qid": 0, 00:13:12.541 "state": "enabled", 00:13:12.541 "thread": "nvmf_tgt_poll_group_000" 00:13:12.541 } 00:13:12.541 ]' 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.541 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.108 15:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:13:13.675 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.675 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:13.675 15:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.675 15:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.675 15:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.675 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:13.675 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:13.675 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:13.933 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:13:13.933 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:13.933 15:36:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:13.933 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:13.933 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:13.933 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.933 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.933 15:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.933 15:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.933 15:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.933 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.933 15:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.191 00:13:14.449 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.449 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.449 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:14.707 { 00:13:14.707 "auth": { 00:13:14.707 "dhgroup": "ffdhe4096", 00:13:14.707 "digest": "sha256", 00:13:14.707 "state": "completed" 00:13:14.707 }, 00:13:14.707 "cntlid": 29, 00:13:14.707 "listen_address": { 00:13:14.707 "adrfam": "IPv4", 00:13:14.707 "traddr": "10.0.0.2", 00:13:14.707 "trsvcid": "4420", 00:13:14.707 "trtype": "TCP" 00:13:14.707 }, 00:13:14.707 "peer_address": { 00:13:14.707 "adrfam": "IPv4", 00:13:14.707 "traddr": "10.0.0.1", 00:13:14.707 "trsvcid": "57086", 00:13:14.707 "trtype": "TCP" 00:13:14.707 }, 00:13:14.707 "qid": 0, 00:13:14.707 "state": "enabled", 00:13:14.707 "thread": "nvmf_tgt_poll_group_000" 00:13:14.707 } 00:13:14.707 ]' 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:14.707 15:36:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.707 15:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.966 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:13:15.531 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.790 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:15.790 15:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.790 15:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.790 15:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.790 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:15.790 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:15.790 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:16.048 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:13:16.048 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.048 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:16.048 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:16.048 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:16.048 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.048 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:13:16.048 15:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.048 15:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.048 15:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.048 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.048 15:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.306 00:13:16.306 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.306 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.306 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:16.564 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.564 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.564 15:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.564 15:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.564 15:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.564 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:16.564 { 00:13:16.564 "auth": { 00:13:16.564 "dhgroup": "ffdhe4096", 00:13:16.564 "digest": "sha256", 00:13:16.564 "state": "completed" 00:13:16.564 }, 00:13:16.564 "cntlid": 31, 00:13:16.564 "listen_address": { 00:13:16.564 "adrfam": "IPv4", 00:13:16.564 "traddr": "10.0.0.2", 00:13:16.564 "trsvcid": "4420", 00:13:16.564 "trtype": "TCP" 00:13:16.564 }, 00:13:16.564 "peer_address": { 00:13:16.564 "adrfam": "IPv4", 00:13:16.564 "traddr": "10.0.0.1", 00:13:16.564 "trsvcid": "57110", 00:13:16.564 "trtype": "TCP" 00:13:16.564 }, 00:13:16.564 "qid": 0, 00:13:16.564 "state": "enabled", 00:13:16.564 "thread": "nvmf_tgt_poll_group_000" 00:13:16.564 } 00:13:16.564 ]' 00:13:16.564 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:16.822 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:16.823 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:16.823 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:16.823 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:16.823 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.823 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.823 15:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.081 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:13:17.647 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.647 15:36:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:17.647 15:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.647 15:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.647 15:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.647 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:17.647 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:17.647 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:17.647 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:17.906 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:13:17.906 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:17.906 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:17.906 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:17.906 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:17.906 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.906 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.906 15:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.906 15:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.906 15:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.906 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.906 15:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.472 00:13:18.472 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:18.472 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:18.472 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.730 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.730 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.730 15:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:18.730 15:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.730 15:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.730 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:18.730 { 00:13:18.730 "auth": { 00:13:18.730 "dhgroup": "ffdhe6144", 00:13:18.730 "digest": "sha256", 00:13:18.730 "state": "completed" 00:13:18.730 }, 00:13:18.730 "cntlid": 33, 00:13:18.730 "listen_address": { 00:13:18.730 "adrfam": "IPv4", 00:13:18.730 "traddr": "10.0.0.2", 00:13:18.730 "trsvcid": "4420", 00:13:18.730 "trtype": "TCP" 00:13:18.730 }, 00:13:18.730 "peer_address": { 00:13:18.730 "adrfam": "IPv4", 00:13:18.730 "traddr": "10.0.0.1", 00:13:18.730 "trsvcid": "57148", 00:13:18.730 "trtype": "TCP" 00:13:18.730 }, 00:13:18.730 "qid": 0, 00:13:18.730 "state": "enabled", 00:13:18.730 "thread": "nvmf_tgt_poll_group_000" 00:13:18.730 } 00:13:18.730 ]' 00:13:18.730 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:18.730 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:18.730 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:18.988 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:18.988 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:18.988 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.988 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.988 15:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.246 15:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:13:19.812 15:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.812 15:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:19.812 15:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.812 15:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.812 15:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.812 15:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:19.812 15:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:19.812 15:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:20.070 15:36:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:13:20.070 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:20.070 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:20.070 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:20.070 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:20.070 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.070 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.070 15:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.070 15:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.070 15:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.070 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.070 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.636 00:13:20.636 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:20.636 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:20.636 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.894 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.894 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.894 15:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.894 15:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.894 15:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.894 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:20.894 { 00:13:20.894 "auth": { 00:13:20.894 "dhgroup": "ffdhe6144", 00:13:20.894 "digest": "sha256", 00:13:20.894 "state": "completed" 00:13:20.894 }, 00:13:20.894 "cntlid": 35, 00:13:20.894 "listen_address": { 00:13:20.894 "adrfam": "IPv4", 00:13:20.894 "traddr": "10.0.0.2", 00:13:20.894 "trsvcid": "4420", 00:13:20.894 "trtype": "TCP" 00:13:20.894 }, 00:13:20.894 "peer_address": { 00:13:20.894 "adrfam": "IPv4", 00:13:20.894 "traddr": "10.0.0.1", 00:13:20.894 "trsvcid": "57164", 00:13:20.894 "trtype": "TCP" 00:13:20.894 }, 00:13:20.894 "qid": 0, 00:13:20.894 "state": "enabled", 00:13:20.894 "thread": "nvmf_tgt_poll_group_000" 00:13:20.894 } 00:13:20.894 ]' 00:13:20.894 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:20.894 
15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:20.894 15:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.152 15:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:21.152 15:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.152 15:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.152 15:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.152 15:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.409 15:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:13:21.975 15:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.975 15:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:21.975 15:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.975 15:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.975 15:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.975 15:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:21.975 15:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:21.975 15:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:22.233 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:13:22.233 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:22.233 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:22.233 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:22.233 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:22.233 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.233 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.233 15:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.233 15:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.233 15:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:13:22.233 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.233 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.800 00:13:22.800 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:22.800 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:22.800 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.081 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.081 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.081 15:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.081 15:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.081 15:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.081 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.081 { 00:13:23.081 "auth": { 00:13:23.081 "dhgroup": "ffdhe6144", 00:13:23.081 "digest": "sha256", 00:13:23.081 "state": "completed" 00:13:23.081 }, 00:13:23.081 "cntlid": 37, 00:13:23.081 "listen_address": { 00:13:23.081 "adrfam": "IPv4", 00:13:23.081 "traddr": "10.0.0.2", 00:13:23.081 "trsvcid": "4420", 00:13:23.081 "trtype": "TCP" 00:13:23.081 }, 00:13:23.081 "peer_address": { 00:13:23.081 "adrfam": "IPv4", 00:13:23.081 "traddr": "10.0.0.1", 00:13:23.081 "trsvcid": "57192", 00:13:23.081 "trtype": "TCP" 00:13:23.081 }, 00:13:23.081 "qid": 0, 00:13:23.081 "state": "enabled", 00:13:23.081 "thread": "nvmf_tgt_poll_group_000" 00:13:23.081 } 00:13:23.081 ]' 00:13:23.081 15:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.081 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.081 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.081 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:23.081 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.081 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.081 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.081 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.353 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret 
DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:13:23.919 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.919 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:23.919 15:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.919 15:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.919 15:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.919 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.919 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:23.919 15:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:24.176 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:13:24.176 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.176 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:24.176 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:24.176 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:24.176 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.176 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:13:24.176 15:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.176 15:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.176 15:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.176 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.176 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.741 00:13:24.741 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.741 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.741 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.998 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.998 15:36:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.998 15:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.998 15:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.998 15:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.998 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.998 { 00:13:24.998 "auth": { 00:13:24.998 "dhgroup": "ffdhe6144", 00:13:24.998 "digest": "sha256", 00:13:24.998 "state": "completed" 00:13:24.998 }, 00:13:24.998 "cntlid": 39, 00:13:24.998 "listen_address": { 00:13:24.999 "adrfam": "IPv4", 00:13:24.999 "traddr": "10.0.0.2", 00:13:24.999 "trsvcid": "4420", 00:13:24.999 "trtype": "TCP" 00:13:24.999 }, 00:13:24.999 "peer_address": { 00:13:24.999 "adrfam": "IPv4", 00:13:24.999 "traddr": "10.0.0.1", 00:13:24.999 "trsvcid": "37970", 00:13:24.999 "trtype": "TCP" 00:13:24.999 }, 00:13:24.999 "qid": 0, 00:13:24.999 "state": "enabled", 00:13:24.999 "thread": "nvmf_tgt_poll_group_000" 00:13:24.999 } 00:13:24.999 ]' 00:13:24.999 15:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.999 15:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:24.999 15:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.999 15:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:24.999 15:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:24.999 15:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.999 15:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.999 15:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.255 15:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:13:26.187 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.187 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:26.187 15:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.187 15:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.187 15:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.187 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:26.187 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:26.187 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:26.187 15:36:21 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:26.444 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:13:26.444 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:26.444 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:26.444 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:26.444 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:26.444 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.444 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.444 15:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.444 15:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.444 15:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.444 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.444 15:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.010 00:13:27.010 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:27.010 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.010 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:27.269 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.269 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.269 15:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.270 15:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.270 15:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.270 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:27.270 { 00:13:27.270 "auth": { 00:13:27.270 "dhgroup": "ffdhe8192", 00:13:27.270 "digest": "sha256", 00:13:27.270 "state": "completed" 00:13:27.270 }, 00:13:27.270 "cntlid": 41, 00:13:27.270 "listen_address": { 00:13:27.270 "adrfam": "IPv4", 00:13:27.270 "traddr": "10.0.0.2", 00:13:27.270 "trsvcid": "4420", 00:13:27.270 "trtype": "TCP" 00:13:27.270 }, 00:13:27.270 "peer_address": { 00:13:27.270 "adrfam": "IPv4", 00:13:27.270 "traddr": "10.0.0.1", 00:13:27.270 "trsvcid": "38000", 00:13:27.270 "trtype": "TCP" 00:13:27.270 }, 00:13:27.270 "qid": 0, 00:13:27.270 "state": "enabled", 
00:13:27.270 "thread": "nvmf_tgt_poll_group_000" 00:13:27.270 } 00:13:27.270 ]' 00:13:27.270 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:27.270 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:27.270 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:27.528 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:27.529 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:27.529 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.529 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.529 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.787 15:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:13:28.353 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.353 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:28.353 15:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.353 15:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.353 15:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.353 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:28.610 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:28.610 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:28.868 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:13:28.868 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:28.868 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:28.868 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:28.868 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:28.868 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.869 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.869 15:36:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.869 15:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.869 15:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.869 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.869 15:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.435 00:13:29.435 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:29.435 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.435 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:29.693 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.693 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.693 15:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.693 15:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.693 15:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.693 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:29.693 { 00:13:29.693 "auth": { 00:13:29.693 "dhgroup": "ffdhe8192", 00:13:29.693 "digest": "sha256", 00:13:29.693 "state": "completed" 00:13:29.693 }, 00:13:29.693 "cntlid": 43, 00:13:29.693 "listen_address": { 00:13:29.693 "adrfam": "IPv4", 00:13:29.693 "traddr": "10.0.0.2", 00:13:29.693 "trsvcid": "4420", 00:13:29.693 "trtype": "TCP" 00:13:29.693 }, 00:13:29.693 "peer_address": { 00:13:29.693 "adrfam": "IPv4", 00:13:29.693 "traddr": "10.0.0.1", 00:13:29.693 "trsvcid": "38010", 00:13:29.693 "trtype": "TCP" 00:13:29.693 }, 00:13:29.693 "qid": 0, 00:13:29.693 "state": "enabled", 00:13:29.693 "thread": "nvmf_tgt_poll_group_000" 00:13:29.693 } 00:13:29.693 ]' 00:13:29.693 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:29.693 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:29.693 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:29.693 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:29.693 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:29.952 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.952 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.952 15:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.211 15:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:13:30.779 15:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.779 15:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:30.779 15:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.779 15:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.779 15:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.779 15:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:30.779 15:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:30.779 15:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:31.038 15:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:13:31.038 15:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:31.038 15:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:31.038 15:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:31.038 15:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:31.038 15:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.038 15:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.038 15:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.038 15:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.038 15:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.038 15:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.038 15:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.606 00:13:31.606 15:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:31.606 15:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.606 
15:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.173 { 00:13:32.173 "auth": { 00:13:32.173 "dhgroup": "ffdhe8192", 00:13:32.173 "digest": "sha256", 00:13:32.173 "state": "completed" 00:13:32.173 }, 00:13:32.173 "cntlid": 45, 00:13:32.173 "listen_address": { 00:13:32.173 "adrfam": "IPv4", 00:13:32.173 "traddr": "10.0.0.2", 00:13:32.173 "trsvcid": "4420", 00:13:32.173 "trtype": "TCP" 00:13:32.173 }, 00:13:32.173 "peer_address": { 00:13:32.173 "adrfam": "IPv4", 00:13:32.173 "traddr": "10.0.0.1", 00:13:32.173 "trsvcid": "38034", 00:13:32.173 "trtype": "TCP" 00:13:32.173 }, 00:13:32.173 "qid": 0, 00:13:32.173 "state": "enabled", 00:13:32.173 "thread": "nvmf_tgt_poll_group_000" 00:13:32.173 } 00:13:32.173 ]' 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.173 15:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.432 15:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:33.368 15:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:34.303 00:13:34.303 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.303 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.303 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.303 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.303 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.303 15:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.303 15:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.303 15:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.303 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.303 { 00:13:34.303 "auth": { 00:13:34.303 "dhgroup": "ffdhe8192", 00:13:34.303 "digest": "sha256", 00:13:34.303 "state": "completed" 00:13:34.303 }, 00:13:34.303 "cntlid": 47, 00:13:34.303 "listen_address": { 00:13:34.303 "adrfam": "IPv4", 00:13:34.303 "traddr": "10.0.0.2", 00:13:34.303 "trsvcid": "4420", 00:13:34.303 "trtype": "TCP" 00:13:34.303 }, 00:13:34.303 "peer_address": { 00:13:34.303 "adrfam": "IPv4", 00:13:34.303 
"traddr": "10.0.0.1", 00:13:34.303 "trsvcid": "38064", 00:13:34.303 "trtype": "TCP" 00:13:34.303 }, 00:13:34.303 "qid": 0, 00:13:34.303 "state": "enabled", 00:13:34.303 "thread": "nvmf_tgt_poll_group_000" 00:13:34.303 } 00:13:34.303 ]' 00:13:34.562 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.562 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.563 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.563 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:34.563 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.563 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.563 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.563 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.821 15:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.795 15:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.362 00:13:36.362 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.362 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.362 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.621 { 00:13:36.621 "auth": { 00:13:36.621 "dhgroup": "null", 00:13:36.621 "digest": "sha384", 00:13:36.621 "state": "completed" 00:13:36.621 }, 00:13:36.621 "cntlid": 49, 00:13:36.621 "listen_address": { 00:13:36.621 "adrfam": "IPv4", 00:13:36.621 "traddr": "10.0.0.2", 00:13:36.621 "trsvcid": "4420", 00:13:36.621 "trtype": "TCP" 00:13:36.621 }, 00:13:36.621 "peer_address": { 00:13:36.621 "adrfam": "IPv4", 00:13:36.621 "traddr": "10.0.0.1", 00:13:36.621 "trsvcid": "55586", 00:13:36.621 "trtype": "TCP" 00:13:36.621 }, 00:13:36.621 "qid": 0, 00:13:36.621 "state": "enabled", 00:13:36.621 "thread": "nvmf_tgt_poll_group_000" 00:13:36.621 } 00:13:36.621 ]' 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.621 15:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.621 15:36:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.880 15:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:13:37.817 15:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.817 15:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:37.817 15:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.817 15:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.817 15:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.817 15:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:37.817 15:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:37.817 15:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:38.114 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:13:38.114 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.114 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:38.114 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:38.114 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:38.114 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.114 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.114 15:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.114 15:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.114 15:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.114 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.114 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key1 --dhchap-ctrlr-key ckey1 00:13:38.379 00:13:38.379 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:38.379 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:38.379 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.639 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.639 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.639 15:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.639 15:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.639 15:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.639 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.639 { 00:13:38.639 "auth": { 00:13:38.639 "dhgroup": "null", 00:13:38.639 "digest": "sha384", 00:13:38.639 "state": "completed" 00:13:38.639 }, 00:13:38.639 "cntlid": 51, 00:13:38.639 "listen_address": { 00:13:38.639 "adrfam": "IPv4", 00:13:38.639 "traddr": "10.0.0.2", 00:13:38.639 "trsvcid": "4420", 00:13:38.639 "trtype": "TCP" 00:13:38.639 }, 00:13:38.639 "peer_address": { 00:13:38.639 "adrfam": "IPv4", 00:13:38.639 "traddr": "10.0.0.1", 00:13:38.639 "trsvcid": "55614", 00:13:38.639 "trtype": "TCP" 00:13:38.639 }, 00:13:38.639 "qid": 0, 00:13:38.639 "state": "enabled", 00:13:38.639 "thread": "nvmf_tgt_poll_group_000" 00:13:38.639 } 00:13:38.639 ]' 00:13:38.639 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.639 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:38.639 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.897 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:38.897 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.897 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.897 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.897 15:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.156 15:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:13:39.731 15:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.731 15:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:39.731 15:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.731 15:36:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.731 15:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.731 15:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:39.731 15:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:39.731 15:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:39.995 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:13:39.995 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:39.995 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:39.995 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:39.995 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:39.995 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.995 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.995 15:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.995 15:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.995 15:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.995 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.995 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.253 00:13:40.253 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:40.253 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:40.253 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:40.821 { 00:13:40.821 "auth": { 00:13:40.821 "dhgroup": "null", 00:13:40.821 "digest": "sha384", 00:13:40.821 "state": "completed" 
00:13:40.821 }, 00:13:40.821 "cntlid": 53, 00:13:40.821 "listen_address": { 00:13:40.821 "adrfam": "IPv4", 00:13:40.821 "traddr": "10.0.0.2", 00:13:40.821 "trsvcid": "4420", 00:13:40.821 "trtype": "TCP" 00:13:40.821 }, 00:13:40.821 "peer_address": { 00:13:40.821 "adrfam": "IPv4", 00:13:40.821 "traddr": "10.0.0.1", 00:13:40.821 "trsvcid": "55640", 00:13:40.821 "trtype": "TCP" 00:13:40.821 }, 00:13:40.821 "qid": 0, 00:13:40.821 "state": "enabled", 00:13:40.821 "thread": "nvmf_tgt_poll_group_000" 00:13:40.821 } 00:13:40.821 ]' 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.821 15:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.080 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:13:41.647 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.647 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:41.647 15:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.647 15:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.647 15:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.647 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:41.648 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:41.648 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:41.906 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:13:41.906 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:41.906 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:41.906 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:41.906 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:41.906 15:36:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.907 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:13:41.907 15:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.907 15:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.907 15:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.907 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:41.907 15:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:42.474 00:13:42.474 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:42.474 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:42.474 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.474 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.474 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.474 15:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.474 15:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.474 15:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.474 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:42.474 { 00:13:42.474 "auth": { 00:13:42.474 "dhgroup": "null", 00:13:42.474 "digest": "sha384", 00:13:42.474 "state": "completed" 00:13:42.474 }, 00:13:42.474 "cntlid": 55, 00:13:42.474 "listen_address": { 00:13:42.474 "adrfam": "IPv4", 00:13:42.474 "traddr": "10.0.0.2", 00:13:42.474 "trsvcid": "4420", 00:13:42.474 "trtype": "TCP" 00:13:42.474 }, 00:13:42.474 "peer_address": { 00:13:42.474 "adrfam": "IPv4", 00:13:42.474 "traddr": "10.0.0.1", 00:13:42.474 "trsvcid": "55672", 00:13:42.474 "trtype": "TCP" 00:13:42.474 }, 00:13:42.474 "qid": 0, 00:13:42.474 "state": "enabled", 00:13:42.474 "thread": "nvmf_tgt_poll_group_000" 00:13:42.474 } 00:13:42.474 ]' 00:13:42.474 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:42.733 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:42.733 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:42.733 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:42.733 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:42.733 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.733 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:42.733 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.992 15:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:13:43.559 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.559 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:43.559 15:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.559 15:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.559 15:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.559 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:43.559 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:43.559 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:43.559 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:44.127 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:13:44.127 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.127 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:44.127 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:44.127 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:44.127 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.127 15:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.127 15:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.127 15:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.127 15:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.127 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.127 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.385 00:13:44.385 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:44.385 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:44.385 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:44.644 { 00:13:44.644 "auth": { 00:13:44.644 "dhgroup": "ffdhe2048", 00:13:44.644 "digest": "sha384", 00:13:44.644 "state": "completed" 00:13:44.644 }, 00:13:44.644 "cntlid": 57, 00:13:44.644 "listen_address": { 00:13:44.644 "adrfam": "IPv4", 00:13:44.644 "traddr": "10.0.0.2", 00:13:44.644 "trsvcid": "4420", 00:13:44.644 "trtype": "TCP" 00:13:44.644 }, 00:13:44.644 "peer_address": { 00:13:44.644 "adrfam": "IPv4", 00:13:44.644 "traddr": "10.0.0.1", 00:13:44.644 "trsvcid": "45660", 00:13:44.644 "trtype": "TCP" 00:13:44.644 }, 00:13:44.644 "qid": 0, 00:13:44.644 "state": "enabled", 00:13:44.644 "thread": "nvmf_tgt_poll_group_000" 00:13:44.644 } 00:13:44.644 ]' 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.644 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.903 15:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.838 15:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.097 15:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.097 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.097 15:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.356 00:13:46.356 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.356 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.356 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.615 
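The xtrace markers in this stretch (target/auth.sh@91 through @96) also expose the outer structure of the test: a triple loop over digests, DH groups and key indexes, where each round restricts the host's allowed DH-HMAC-CHAP parameters and then runs the full add-host / attach / verify / detach / nvme-connect cycle for one key pair. A rough bash reconstruction of that loop, inferred from the trace (the digests, dhgroups and keys arrays and the connect_authenticate helper are defined earlier in the real script and are assumed here):

    for digest in "${digests[@]}"; do       # target/auth.sh@91
      for dhgroup in "${dhgroups[@]}"; do   # @92
        for keyid in "${!keys[@]}"; do      # @93
          # @94: allow only a single digest/DH-group pair on the host per round.
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          # @96: register the host with key$keyid/ckey$keyid, attach nvme0,
          # assert .auth.{digest,dhgroup,state} via jq, detach, then repeat the
          # handshake once more through `nvme connect` / `nvme disconnect`
          # before removing the host again.
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done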
15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:46.615 { 00:13:46.615 "auth": { 00:13:46.615 "dhgroup": "ffdhe2048", 00:13:46.615 "digest": "sha384", 00:13:46.615 "state": "completed" 00:13:46.615 }, 00:13:46.615 "cntlid": 59, 00:13:46.615 "listen_address": { 00:13:46.615 "adrfam": "IPv4", 00:13:46.615 "traddr": "10.0.0.2", 00:13:46.615 "trsvcid": "4420", 00:13:46.615 "trtype": "TCP" 00:13:46.615 }, 00:13:46.615 "peer_address": { 00:13:46.615 "adrfam": "IPv4", 00:13:46.615 "traddr": "10.0.0.1", 00:13:46.615 "trsvcid": "45674", 00:13:46.615 "trtype": "TCP" 00:13:46.615 }, 00:13:46.615 "qid": 0, 00:13:46.615 "state": "enabled", 00:13:46.615 "thread": "nvmf_tgt_poll_group_000" 00:13:46.615 } 00:13:46.615 ]' 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.615 15:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.183 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:13:47.751 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.751 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:47.751 15:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.751 15:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.751 15:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.751 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:47.751 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:47.751 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:48.010 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:13:48.010 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.010 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:13:48.010 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:48.010 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:48.010 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.010 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.010 15:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.010 15:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.010 15:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.010 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.010 15:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.284 00:13:48.284 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:48.284 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:48.284 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:48.546 { 00:13:48.546 "auth": { 00:13:48.546 "dhgroup": "ffdhe2048", 00:13:48.546 "digest": "sha384", 00:13:48.546 "state": "completed" 00:13:48.546 }, 00:13:48.546 "cntlid": 61, 00:13:48.546 "listen_address": { 00:13:48.546 "adrfam": "IPv4", 00:13:48.546 "traddr": "10.0.0.2", 00:13:48.546 "trsvcid": "4420", 00:13:48.546 "trtype": "TCP" 00:13:48.546 }, 00:13:48.546 "peer_address": { 00:13:48.546 "adrfam": "IPv4", 00:13:48.546 "traddr": "10.0.0.1", 00:13:48.546 "trsvcid": "45706", 00:13:48.546 "trtype": "TCP" 00:13:48.546 }, 00:13:48.546 "qid": 0, 00:13:48.546 "state": "enabled", 00:13:48.546 "thread": "nvmf_tgt_poll_group_000" 00:13:48.546 } 00:13:48.546 ]' 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.546 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.804 15:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.738 15:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:49.738 15:36:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:50.307 00:13:50.307 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.307 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:50.307 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.307 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.307 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.307 15:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.307 15:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.307 15:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.307 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:50.307 { 00:13:50.307 "auth": { 00:13:50.307 "dhgroup": "ffdhe2048", 00:13:50.307 "digest": "sha384", 00:13:50.307 "state": "completed" 00:13:50.307 }, 00:13:50.307 "cntlid": 63, 00:13:50.307 "listen_address": { 00:13:50.307 "adrfam": "IPv4", 00:13:50.307 "traddr": "10.0.0.2", 00:13:50.307 "trsvcid": "4420", 00:13:50.307 "trtype": "TCP" 00:13:50.307 }, 00:13:50.307 "peer_address": { 00:13:50.307 "adrfam": "IPv4", 00:13:50.307 "traddr": "10.0.0.1", 00:13:50.307 "trsvcid": "45720", 00:13:50.307 "trtype": "TCP" 00:13:50.307 }, 00:13:50.307 "qid": 0, 00:13:50.307 "state": "enabled", 00:13:50.307 "thread": "nvmf_tgt_poll_group_000" 00:13:50.307 } 00:13:50.307 ]' 00:13:50.307 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:50.566 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:50.566 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:50.566 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:50.566 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:50.566 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.566 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.566 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.824 15:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:13:51.393 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.393 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:51.393 15:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.393 15:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.652 15:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.652 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:51.652 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:51.652 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:51.652 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:51.911 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:13:51.911 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:51.911 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:51.911 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:51.911 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:51.911 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.911 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.911 15:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.911 15:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.911 15:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.911 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.911 15:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.170 00:13:52.170 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:52.170 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.170 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:52.429 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.429 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.429 15:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.429 15:36:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.429 15:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.429 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:52.429 { 00:13:52.429 "auth": { 00:13:52.429 "dhgroup": "ffdhe3072", 00:13:52.429 "digest": "sha384", 00:13:52.429 "state": "completed" 00:13:52.429 }, 00:13:52.429 "cntlid": 65, 00:13:52.429 "listen_address": { 00:13:52.429 "adrfam": "IPv4", 00:13:52.429 "traddr": "10.0.0.2", 00:13:52.429 "trsvcid": "4420", 00:13:52.429 "trtype": "TCP" 00:13:52.429 }, 00:13:52.429 "peer_address": { 00:13:52.429 "adrfam": "IPv4", 00:13:52.429 "traddr": "10.0.0.1", 00:13:52.429 "trsvcid": "45756", 00:13:52.429 "trtype": "TCP" 00:13:52.429 }, 00:13:52.429 "qid": 0, 00:13:52.429 "state": "enabled", 00:13:52.429 "thread": "nvmf_tgt_poll_group_000" 00:13:52.429 } 00:13:52.429 ]' 00:13:52.429 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:52.688 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:52.688 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:52.688 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:52.688 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.688 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.688 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.688 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.953 15:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:13:53.533 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.533 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:53.533 15:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.533 15:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.533 15:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.533 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:53.533 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:53.533 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:53.792 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
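Each round of this section starts on the target side by registering the host NQN with the DHCHAP key pair under test and ends by removing it again. A condensed sketch of that step, reconstructed from the rpc_cmd calls in the trace (both NQNs and the key names key1/ckey1 are taken from the log; the target-side RPC socket and the earlier key registration are assumed), might look like:

    # Target-side registration for one round: add the host NQN with the DHCHAP
    # key pair under test, remove it again once the round is finished.
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc_cmd in the trace; target socket assumed default

    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1    # ckey1 enables bidirectional auth

    # ... host-side attach, verification and nvme-cli connect happen here ...

    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
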
-- # connect_authenticate sha384 ffdhe3072 1 00:13:53.792 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:53.792 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:53.792 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:53.792 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:53.792 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.792 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:53.792 15:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.792 15:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.792 15:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.792 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:53.792 15:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.359 00:13:54.359 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.359 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.359 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:54.617 { 00:13:54.617 "auth": { 00:13:54.617 "dhgroup": "ffdhe3072", 00:13:54.617 "digest": "sha384", 00:13:54.617 "state": "completed" 00:13:54.617 }, 00:13:54.617 "cntlid": 67, 00:13:54.617 "listen_address": { 00:13:54.617 "adrfam": "IPv4", 00:13:54.617 "traddr": "10.0.0.2", 00:13:54.617 "trsvcid": "4420", 00:13:54.617 "trtype": "TCP" 00:13:54.617 }, 00:13:54.617 "peer_address": { 00:13:54.617 "adrfam": "IPv4", 00:13:54.617 "traddr": "10.0.0.1", 00:13:54.617 "trsvcid": "45782", 00:13:54.617 "trtype": "TCP" 00:13:54.617 }, 00:13:54.617 "qid": 0, 00:13:54.617 "state": "enabled", 00:13:54.617 "thread": "nvmf_tgt_poll_group_000" 00:13:54.617 } 00:13:54.617 ]' 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.617 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.875 15:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:13:55.808 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.808 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:55.808 15:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.808 15:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.808 15:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.808 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:55.808 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:55.808 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:56.066 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:13:56.066 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.066 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:56.066 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:56.066 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:56.066 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.066 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.066 15:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.066 15:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.066 15:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.066 15:36:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.066 15:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.324 00:13:56.324 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.324 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.324 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.581 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.581 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.581 15:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.581 15:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.581 15:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.581 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:56.581 { 00:13:56.581 "auth": { 00:13:56.581 "dhgroup": "ffdhe3072", 00:13:56.581 "digest": "sha384", 00:13:56.581 "state": "completed" 00:13:56.581 }, 00:13:56.581 "cntlid": 69, 00:13:56.581 "listen_address": { 00:13:56.581 "adrfam": "IPv4", 00:13:56.581 "traddr": "10.0.0.2", 00:13:56.581 "trsvcid": "4420", 00:13:56.581 "trtype": "TCP" 00:13:56.581 }, 00:13:56.581 "peer_address": { 00:13:56.581 "adrfam": "IPv4", 00:13:56.581 "traddr": "10.0.0.1", 00:13:56.581 "trsvcid": "49556", 00:13:56.581 "trtype": "TCP" 00:13:56.581 }, 00:13:56.581 "qid": 0, 00:13:56.581 "state": "enabled", 00:13:56.581 "thread": "nvmf_tgt_poll_group_000" 00:13:56.581 } 00:13:56.581 ]' 00:13:56.581 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.581 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:56.581 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.839 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:56.839 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.839 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.839 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.839 15:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.096 15:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret 
DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:13:57.663 15:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.663 15:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:57.663 15:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.663 15:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.663 15:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.663 15:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.663 15:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:57.663 15:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:57.921 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:13:57.921 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:57.921 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:57.921 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:57.921 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:57.921 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.921 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:13:57.921 15:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.921 15:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.921 15:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.921 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:57.921 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:58.488 00:13:58.488 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:58.488 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:58.488 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.746 15:36:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:58.746 { 00:13:58.746 "auth": { 00:13:58.746 "dhgroup": "ffdhe3072", 00:13:58.746 "digest": "sha384", 00:13:58.746 "state": "completed" 00:13:58.746 }, 00:13:58.746 "cntlid": 71, 00:13:58.746 "listen_address": { 00:13:58.746 "adrfam": "IPv4", 00:13:58.746 "traddr": "10.0.0.2", 00:13:58.746 "trsvcid": "4420", 00:13:58.746 "trtype": "TCP" 00:13:58.746 }, 00:13:58.746 "peer_address": { 00:13:58.746 "adrfam": "IPv4", 00:13:58.746 "traddr": "10.0.0.1", 00:13:58.746 "trsvcid": "49588", 00:13:58.746 "trtype": "TCP" 00:13:58.746 }, 00:13:58.746 "qid": 0, 00:13:58.746 "state": "enabled", 00:13:58.746 "thread": "nvmf_tgt_poll_group_000" 00:13:58.746 } 00:13:58.746 ]' 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.746 15:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.004 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:13:59.571 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.571 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:13:59.571 15:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.571 15:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.571 15:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.571 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:59.571 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.571 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:59.571 15:36:54 nvmf_tcp.nvmf_auth_target 
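On the host side, the trace then restricts the SPDK initiator to the digest and DH group under test before attaching the controller with the matching keys. A minimal sketch of the ffdhe4096/key0 steps that follow, using only RPCs that appear in the log (the host socket /var/tmp/host.sock, target address and key names are taken from the trace; key registration earlier in the run is assumed):

    # Host-side steps for the ffdhe4096/key0 round, as expanded in the trace.
    hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

    # Restrict the initiator to the digest/dhgroup pair under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # Attach with DHCHAP (key0) and bidirectional DHCHAP (ckey0); the attach only
    # succeeds if the authentication handshake completes.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Confirm the controller actually showed up under the expected name.
    [[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
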
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:59.830 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:13:59.831 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.831 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:59.831 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:59.831 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:59.831 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.831 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.831 15:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.831 15:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.831 15:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.831 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.831 15:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:00.397 00:14:00.397 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:00.397 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:00.397 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.655 { 00:14:00.655 "auth": { 00:14:00.655 "dhgroup": "ffdhe4096", 00:14:00.655 "digest": "sha384", 00:14:00.655 "state": "completed" 00:14:00.655 }, 00:14:00.655 "cntlid": 73, 00:14:00.655 "listen_address": { 00:14:00.655 "adrfam": "IPv4", 00:14:00.655 "traddr": "10.0.0.2", 00:14:00.655 "trsvcid": "4420", 00:14:00.655 "trtype": "TCP" 00:14:00.655 }, 00:14:00.655 "peer_address": { 00:14:00.655 "adrfam": "IPv4", 00:14:00.655 "traddr": "10.0.0.1", 00:14:00.655 "trsvcid": "49612", 00:14:00.655 "trtype": "TCP" 00:14:00.655 }, 00:14:00.655 "qid": 0, 00:14:00.655 "state": "enabled", 
00:14:00.655 "thread": "nvmf_tgt_poll_group_000" 00:14:00.655 } 00:14:00.655 ]' 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.655 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.913 15:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:14:01.478 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.478 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:01.478 15:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.478 15:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.478 15:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.478 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.478 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:01.478 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:01.736 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:14:01.736 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.736 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:01.736 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:01.736 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:01.736 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.736 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.736 15:36:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.736 15:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.995 15:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.995 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.995 15:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:02.254 00:14:02.254 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.254 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.254 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.514 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.514 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.514 15:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.514 15:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.514 15:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.514 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.514 { 00:14:02.514 "auth": { 00:14:02.514 "dhgroup": "ffdhe4096", 00:14:02.514 "digest": "sha384", 00:14:02.514 "state": "completed" 00:14:02.514 }, 00:14:02.514 "cntlid": 75, 00:14:02.514 "listen_address": { 00:14:02.514 "adrfam": "IPv4", 00:14:02.514 "traddr": "10.0.0.2", 00:14:02.514 "trsvcid": "4420", 00:14:02.514 "trtype": "TCP" 00:14:02.514 }, 00:14:02.514 "peer_address": { 00:14:02.514 "adrfam": "IPv4", 00:14:02.514 "traddr": "10.0.0.1", 00:14:02.514 "trsvcid": "49628", 00:14:02.514 "trtype": "TCP" 00:14:02.514 }, 00:14:02.514 "qid": 0, 00:14:02.514 "state": "enabled", 00:14:02.514 "thread": "nvmf_tgt_poll_group_000" 00:14:02.514 } 00:14:02.514 ]' 00:14:02.514 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.514 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.514 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.514 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:02.514 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.773 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.773 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.773 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.031 15:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:14:03.599 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.599 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:03.599 15:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.599 15:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.599 15:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.599 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.599 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:03.599 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:03.858 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:14:03.858 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.858 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:03.858 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:03.858 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:03.858 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.858 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.858 15:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.858 15:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.858 15:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.858 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.858 15:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:04.425 00:14:04.425 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.425 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.425 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.425 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.425 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.425 15:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.425 15:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.683 15:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.683 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.683 { 00:14:04.683 "auth": { 00:14:04.683 "dhgroup": "ffdhe4096", 00:14:04.683 "digest": "sha384", 00:14:04.683 "state": "completed" 00:14:04.683 }, 00:14:04.683 "cntlid": 77, 00:14:04.683 "listen_address": { 00:14:04.683 "adrfam": "IPv4", 00:14:04.684 "traddr": "10.0.0.2", 00:14:04.684 "trsvcid": "4420", 00:14:04.684 "trtype": "TCP" 00:14:04.684 }, 00:14:04.684 "peer_address": { 00:14:04.684 "adrfam": "IPv4", 00:14:04.684 "traddr": "10.0.0.1", 00:14:04.684 "trsvcid": "49638", 00:14:04.684 "trtype": "TCP" 00:14:04.684 }, 00:14:04.684 "qid": 0, 00:14:04.684 "state": "enabled", 00:14:04.684 "thread": "nvmf_tgt_poll_group_000" 00:14:04.684 } 00:14:04.684 ]' 00:14:04.684 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.684 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:04.684 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.684 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:04.684 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.684 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.684 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.684 15:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.942 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:14:05.878 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.878 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:05.878 15:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.879 15:37:00 
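Once the attach succeeds, the checks in the trace confirm what was actually negotiated by asking the target for the subsystem's queue pairs and inspecting the auth fields. A sketch of that verification with the same jq filters used above (the target RPC socket is not shown in this excerpt and is assumed to be the default):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target-side view of the admin queue pair for the subsystem.
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384 ]]     # digest under test
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe4096 ]]  # DH group under test
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]  # handshake finished

    # Drop the SPDK-initiator connection before the nvme-cli connect is attempted.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
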
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:05.879 15:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:06.445 00:14:06.445 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:06.445 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.445 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.704 { 00:14:06.704 "auth": { 00:14:06.704 "dhgroup": "ffdhe4096", 00:14:06.704 "digest": "sha384", 00:14:06.704 "state": "completed" 00:14:06.704 }, 00:14:06.704 "cntlid": 79, 00:14:06.704 "listen_address": { 00:14:06.704 "adrfam": "IPv4", 00:14:06.704 "traddr": "10.0.0.2", 00:14:06.704 "trsvcid": "4420", 00:14:06.704 "trtype": "TCP" 00:14:06.704 }, 00:14:06.704 
"peer_address": { 00:14:06.704 "adrfam": "IPv4", 00:14:06.704 "traddr": "10.0.0.1", 00:14:06.704 "trsvcid": "32960", 00:14:06.704 "trtype": "TCP" 00:14:06.704 }, 00:14:06.704 "qid": 0, 00:14:06.704 "state": "enabled", 00:14:06.704 "thread": "nvmf_tgt_poll_group_000" 00:14:06.704 } 00:14:06.704 ]' 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.704 15:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.271 15:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:14:07.862 15:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.862 15:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:07.862 15:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.862 15:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.862 15:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.862 15:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:07.862 15:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:07.862 15:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:07.862 15:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:08.121 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:14:08.121 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.121 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:08.121 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:08.121 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:08.121 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.121 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.121 15:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.121 15:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.121 15:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.121 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.121 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.379 00:14:08.379 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.379 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.379 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.636 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.636 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.636 15:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.636 15:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.636 15:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.636 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.636 { 00:14:08.636 "auth": { 00:14:08.636 "dhgroup": "ffdhe6144", 00:14:08.636 "digest": "sha384", 00:14:08.636 "state": "completed" 00:14:08.636 }, 00:14:08.636 "cntlid": 81, 00:14:08.636 "listen_address": { 00:14:08.636 "adrfam": "IPv4", 00:14:08.636 "traddr": "10.0.0.2", 00:14:08.636 "trsvcid": "4420", 00:14:08.636 "trtype": "TCP" 00:14:08.636 }, 00:14:08.636 "peer_address": { 00:14:08.636 "adrfam": "IPv4", 00:14:08.636 "traddr": "10.0.0.1", 00:14:08.636 "trsvcid": "32984", 00:14:08.636 "trtype": "TCP" 00:14:08.636 }, 00:14:08.636 "qid": 0, 00:14:08.636 "state": "enabled", 00:14:08.636 "thread": "nvmf_tgt_poll_group_000" 00:14:08.636 } 00:14:08.636 ]' 00:14:08.636 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:08.894 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:08.894 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:08.894 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:08.894 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:08.894 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.894 15:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.894 15:37:03 
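Each round also exercises the kernel initiator with the same credentials, passing the DHHC-1 secrets inline instead of by key name, as the connect/disconnect pairs in the trace show. A sketch with the secrets elided (address, NQNs, hostid and flags are taken verbatim from the log):

    # Kernel-initiator pass with the same credentials; the full DHHC-1 strings are
    # elided here but appear verbatim in the trace.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb \
        --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb \
        --dhchap-secret 'DHHC-1:00:...' \
        --dhchap-ctrl-secret 'DHHC-1:03:...'   # controller secret only for bidirectional rounds

    # A successful connect implies the handshake passed; disconnect again afterwards.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
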
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.152 15:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:14:09.719 15:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.719 15:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:09.719 15:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.719 15:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.719 15:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.719 15:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:09.719 15:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:09.719 15:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:09.977 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:14:09.977 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.977 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:09.977 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:09.977 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:09.977 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.977 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.977 15:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.977 15:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.977 15:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.977 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.977 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.545 00:14:10.545 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.545 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.545 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.804 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.804 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.804 15:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.804 15:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.804 15:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.804 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.804 { 00:14:10.804 "auth": { 00:14:10.804 "dhgroup": "ffdhe6144", 00:14:10.804 "digest": "sha384", 00:14:10.804 "state": "completed" 00:14:10.804 }, 00:14:10.804 "cntlid": 83, 00:14:10.804 "listen_address": { 00:14:10.804 "adrfam": "IPv4", 00:14:10.804 "traddr": "10.0.0.2", 00:14:10.804 "trsvcid": "4420", 00:14:10.804 "trtype": "TCP" 00:14:10.804 }, 00:14:10.804 "peer_address": { 00:14:10.804 "adrfam": "IPv4", 00:14:10.804 "traddr": "10.0.0.1", 00:14:10.804 "trsvcid": "33014", 00:14:10.804 "trtype": "TCP" 00:14:10.804 }, 00:14:10.804 "qid": 0, 00:14:10.804 "state": "enabled", 00:14:10.804 "thread": "nvmf_tgt_poll_group_000" 00:14:10.804 } 00:14:10.804 ]' 00:14:10.804 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:10.804 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:10.804 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:10.804 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:10.804 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.062 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.062 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.062 15:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.322 15:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:14:11.889 15:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.889 15:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:11.889 15:37:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.889 15:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.889 15:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.889 15:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.889 15:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:11.889 15:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:12.147 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:14:12.147 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.147 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:12.147 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:12.147 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:12.147 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.147 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.147 15:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.147 15:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.147 15:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.147 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.147 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.406 00:14:12.665 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.665 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.665 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.924 { 00:14:12.924 "auth": { 
00:14:12.924 "dhgroup": "ffdhe6144", 00:14:12.924 "digest": "sha384", 00:14:12.924 "state": "completed" 00:14:12.924 }, 00:14:12.924 "cntlid": 85, 00:14:12.924 "listen_address": { 00:14:12.924 "adrfam": "IPv4", 00:14:12.924 "traddr": "10.0.0.2", 00:14:12.924 "trsvcid": "4420", 00:14:12.924 "trtype": "TCP" 00:14:12.924 }, 00:14:12.924 "peer_address": { 00:14:12.924 "adrfam": "IPv4", 00:14:12.924 "traddr": "10.0.0.1", 00:14:12.924 "trsvcid": "33044", 00:14:12.924 "trtype": "TCP" 00:14:12.924 }, 00:14:12.924 "qid": 0, 00:14:12.924 "state": "enabled", 00:14:12.924 "thread": "nvmf_tgt_poll_group_000" 00:14:12.924 } 00:14:12.924 ]' 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.924 15:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.182 15:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:14:14.117 15:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.117 15:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:14.117 15:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.117 15:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.117 15:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.117 15:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.117 15:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:14.117 15:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:14.375 15:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:14:14.375 15:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.375 15:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:14.375 15:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 
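For orientation, each digest/dhgroup/key iteration recorded in this log reduces to the same short RPC and nvme-cli sequence. The sketch below condenses the sha384/ffdhe6144 key2 pass using only commands that appear in this log; rpc_cmd is the autotest harness wrapper that talks to the target's RPC socket, and the DHHC-1 secrets stand in for the ones the test prints above. This is a reading aid under those assumptions, not the verbatim auth.sh code.

    # host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # target side: allow the host NQN with the key pair under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # host side: attach with the same keys, then verify the negotiated auth state
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
    # tear down, then repeat the handshake through nvme-cli with the raw DHHC-1 secrets
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb \
        --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb \
        --dhchap-secret DHHC-1:02:... --dhchap-ctrl-secret DHHC-1:01:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb

The log that follows repeats this pattern for each remaining key index and then for the ffdhe8192, and later the sha512, combinations.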
00:14:14.375 15:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:14.375 15:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.375 15:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:14:14.375 15:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.375 15:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.375 15:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.375 15:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:14.375 15:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:14.942 00:14:14.942 15:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.942 15:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.942 15:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.942 15:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.942 15:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.942 15:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.942 15:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.200 15:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.200 15:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.200 { 00:14:15.200 "auth": { 00:14:15.200 "dhgroup": "ffdhe6144", 00:14:15.200 "digest": "sha384", 00:14:15.200 "state": "completed" 00:14:15.200 }, 00:14:15.200 "cntlid": 87, 00:14:15.200 "listen_address": { 00:14:15.200 "adrfam": "IPv4", 00:14:15.200 "traddr": "10.0.0.2", 00:14:15.200 "trsvcid": "4420", 00:14:15.200 "trtype": "TCP" 00:14:15.200 }, 00:14:15.200 "peer_address": { 00:14:15.200 "adrfam": "IPv4", 00:14:15.200 "traddr": "10.0.0.1", 00:14:15.200 "trsvcid": "50456", 00:14:15.200 "trtype": "TCP" 00:14:15.200 }, 00:14:15.200 "qid": 0, 00:14:15.200 "state": "enabled", 00:14:15.200 "thread": "nvmf_tgt_poll_group_000" 00:14:15.200 } 00:14:15.200 ]' 00:14:15.200 15:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.200 15:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:15.200 15:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.200 15:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:15.200 15:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.200 15:37:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.200 15:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.200 15:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.458 15:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:14:16.390 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.390 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:16.390 15:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.390 15:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.390 15:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.390 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.390 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.390 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:16.390 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:16.648 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:14:16.648 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.648 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:16.648 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:16.648 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:16.648 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.648 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.648 15:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.648 15:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.648 15:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.648 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.648 15:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.216 00:14:17.216 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.216 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.216 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.475 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.475 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.475 15:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.475 15:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.475 15:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.475 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.475 { 00:14:17.475 "auth": { 00:14:17.475 "dhgroup": "ffdhe8192", 00:14:17.475 "digest": "sha384", 00:14:17.475 "state": "completed" 00:14:17.475 }, 00:14:17.475 "cntlid": 89, 00:14:17.475 "listen_address": { 00:14:17.475 "adrfam": "IPv4", 00:14:17.475 "traddr": "10.0.0.2", 00:14:17.475 "trsvcid": "4420", 00:14:17.475 "trtype": "TCP" 00:14:17.475 }, 00:14:17.475 "peer_address": { 00:14:17.475 "adrfam": "IPv4", 00:14:17.475 "traddr": "10.0.0.1", 00:14:17.475 "trsvcid": "50474", 00:14:17.475 "trtype": "TCP" 00:14:17.475 }, 00:14:17.475 "qid": 0, 00:14:17.475 "state": "enabled", 00:14:17.475 "thread": "nvmf_tgt_poll_group_000" 00:14:17.475 } 00:14:17.475 ]' 00:14:17.475 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.475 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:17.475 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.733 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:17.733 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.733 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.733 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.733 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.992 15:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:14:18.561 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.561 
15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:18.561 15:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.561 15:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.561 15:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.561 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.561 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:18.561 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:19.129 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:14:19.129 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.129 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:19.129 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:19.129 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:19.129 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.129 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.129 15:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.129 15:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.129 15:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.129 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.129 15:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.697 00:14:19.697 15:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.697 15:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.697 15:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.955 15:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.955 15:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.955 15:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.955 15:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:19.955 15:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.955 15:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.955 { 00:14:19.955 "auth": { 00:14:19.955 "dhgroup": "ffdhe8192", 00:14:19.955 "digest": "sha384", 00:14:19.955 "state": "completed" 00:14:19.955 }, 00:14:19.955 "cntlid": 91, 00:14:19.955 "listen_address": { 00:14:19.955 "adrfam": "IPv4", 00:14:19.955 "traddr": "10.0.0.2", 00:14:19.955 "trsvcid": "4420", 00:14:19.955 "trtype": "TCP" 00:14:19.955 }, 00:14:19.955 "peer_address": { 00:14:19.955 "adrfam": "IPv4", 00:14:19.955 "traddr": "10.0.0.1", 00:14:19.955 "trsvcid": "50502", 00:14:19.955 "trtype": "TCP" 00:14:19.955 }, 00:14:19.955 "qid": 0, 00:14:19.955 "state": "enabled", 00:14:19.955 "thread": "nvmf_tgt_poll_group_000" 00:14:19.955 } 00:14:19.955 ]' 00:14:19.955 15:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.955 15:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.955 15:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.955 15:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:19.955 15:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.214 15:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.214 15:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.214 15:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.472 15:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:14:21.039 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.039 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:21.039 15:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.039 15:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.039 15:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.039 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.039 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:21.039 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:21.298 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:14:21.298 15:37:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.298 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:21.298 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:21.298 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:21.298 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.298 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.298 15:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.298 15:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.298 15:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.298 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.298 15:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.233 00:14:22.233 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.233 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.233 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.492 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.493 { 00:14:22.493 "auth": { 00:14:22.493 "dhgroup": "ffdhe8192", 00:14:22.493 "digest": "sha384", 00:14:22.493 "state": "completed" 00:14:22.493 }, 00:14:22.493 "cntlid": 93, 00:14:22.493 "listen_address": { 00:14:22.493 "adrfam": "IPv4", 00:14:22.493 "traddr": "10.0.0.2", 00:14:22.493 "trsvcid": "4420", 00:14:22.493 "trtype": "TCP" 00:14:22.493 }, 00:14:22.493 "peer_address": { 00:14:22.493 "adrfam": "IPv4", 00:14:22.493 "traddr": "10.0.0.1", 00:14:22.493 "trsvcid": "50536", 00:14:22.493 "trtype": "TCP" 00:14:22.493 }, 00:14:22.493 "qid": 0, 00:14:22.493 "state": "enabled", 00:14:22.493 "thread": "nvmf_tgt_poll_group_000" 00:14:22.493 } 00:14:22.493 ]' 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.493 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.752 15:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:14:23.687 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.687 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:23.687 15:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.687 15:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.687 15:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.687 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.687 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:23.687 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:23.977 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:14:23.977 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.977 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:23.977 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:23.977 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:23.977 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.977 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:14:23.977 15:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.977 15:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.977 15:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.977 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:23.977 15:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.574 00:14:24.574 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.574 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.574 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.574 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.574 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.574 15:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.574 15:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.574 15:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.574 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.574 { 00:14:24.574 "auth": { 00:14:24.574 "dhgroup": "ffdhe8192", 00:14:24.574 "digest": "sha384", 00:14:24.574 "state": "completed" 00:14:24.574 }, 00:14:24.574 "cntlid": 95, 00:14:24.574 "listen_address": { 00:14:24.574 "adrfam": "IPv4", 00:14:24.574 "traddr": "10.0.0.2", 00:14:24.574 "trsvcid": "4420", 00:14:24.574 "trtype": "TCP" 00:14:24.574 }, 00:14:24.574 "peer_address": { 00:14:24.574 "adrfam": "IPv4", 00:14:24.574 "traddr": "10.0.0.1", 00:14:24.574 "trsvcid": "50570", 00:14:24.574 "trtype": "TCP" 00:14:24.574 }, 00:14:24.574 "qid": 0, 00:14:24.574 "state": "enabled", 00:14:24.574 "thread": "nvmf_tgt_poll_group_000" 00:14:24.574 } 00:14:24.574 ]' 00:14:24.574 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.833 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:24.833 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.833 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:24.833 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.833 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.833 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.833 15:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.091 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:14:25.659 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.659 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.659 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:25.659 15:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.659 15:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.659 15:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.659 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:25.659 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:25.659 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.659 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:25.659 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:25.917 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:14:25.917 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.917 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:25.917 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:25.917 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:25.917 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.917 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.917 15:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.917 15:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.917 15:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.917 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.917 15:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.176 00:14:26.176 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.176 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.176 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.436 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.436 15:37:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.436 15:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.436 15:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.436 15:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.436 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.436 { 00:14:26.436 "auth": { 00:14:26.436 "dhgroup": "null", 00:14:26.436 "digest": "sha512", 00:14:26.436 "state": "completed" 00:14:26.436 }, 00:14:26.436 "cntlid": 97, 00:14:26.436 "listen_address": { 00:14:26.436 "adrfam": "IPv4", 00:14:26.436 "traddr": "10.0.0.2", 00:14:26.436 "trsvcid": "4420", 00:14:26.436 "trtype": "TCP" 00:14:26.436 }, 00:14:26.436 "peer_address": { 00:14:26.436 "adrfam": "IPv4", 00:14:26.436 "traddr": "10.0.0.1", 00:14:26.436 "trsvcid": "55828", 00:14:26.436 "trtype": "TCP" 00:14:26.436 }, 00:14:26.436 "qid": 0, 00:14:26.436 "state": "enabled", 00:14:26.436 "thread": "nvmf_tgt_poll_group_000" 00:14:26.436 } 00:14:26.436 ]' 00:14:26.436 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.436 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:26.436 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.695 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:26.695 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.695 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.695 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.695 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.953 15:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:14:27.522 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.522 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:27.522 15:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.522 15:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.522 15:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.522 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.522 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:27.522 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:27.780 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:14:27.780 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.780 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:27.780 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:27.780 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:27.780 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.780 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.780 15:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.780 15:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.780 15:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.780 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.780 15:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.039 00:14:28.039 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.039 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.039 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.313 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.313 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.313 15:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.313 15:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.313 15:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.313 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.313 { 00:14:28.313 "auth": { 00:14:28.313 "dhgroup": "null", 00:14:28.313 "digest": "sha512", 00:14:28.313 "state": "completed" 00:14:28.313 }, 00:14:28.313 "cntlid": 99, 00:14:28.313 "listen_address": { 00:14:28.313 "adrfam": "IPv4", 00:14:28.313 "traddr": "10.0.0.2", 00:14:28.313 "trsvcid": "4420", 00:14:28.313 "trtype": "TCP" 00:14:28.313 }, 00:14:28.313 "peer_address": { 00:14:28.313 "adrfam": "IPv4", 00:14:28.313 "traddr": "10.0.0.1", 00:14:28.313 "trsvcid": "55852", 00:14:28.313 "trtype": "TCP" 00:14:28.313 }, 00:14:28.313 "qid": 0, 00:14:28.313 "state": "enabled", 00:14:28.313 "thread": "nvmf_tgt_poll_group_000" 
00:14:28.313 } 00:14:28.313 ]' 00:14:28.313 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.580 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:28.580 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.580 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:28.580 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.580 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.580 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.580 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.837 15:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:14:29.402 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.402 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:29.402 15:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.402 15:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.660 15:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.660 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.660 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:29.660 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:29.918 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:14:29.918 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.918 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:29.918 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:29.918 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:29.918 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.918 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.918 15:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.918 15:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:14:29.918 15:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.918 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.918 15:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.177 00:14:30.177 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.177 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.177 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.436 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.436 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.436 15:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.436 15:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.436 15:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.436 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.436 { 00:14:30.436 "auth": { 00:14:30.436 "dhgroup": "null", 00:14:30.436 "digest": "sha512", 00:14:30.436 "state": "completed" 00:14:30.436 }, 00:14:30.436 "cntlid": 101, 00:14:30.436 "listen_address": { 00:14:30.436 "adrfam": "IPv4", 00:14:30.436 "traddr": "10.0.0.2", 00:14:30.436 "trsvcid": "4420", 00:14:30.436 "trtype": "TCP" 00:14:30.436 }, 00:14:30.436 "peer_address": { 00:14:30.436 "adrfam": "IPv4", 00:14:30.436 "traddr": "10.0.0.1", 00:14:30.436 "trsvcid": "55874", 00:14:30.436 "trtype": "TCP" 00:14:30.436 }, 00:14:30.436 "qid": 0, 00:14:30.436 "state": "enabled", 00:14:30.436 "thread": "nvmf_tgt_poll_group_000" 00:14:30.436 } 00:14:30.436 ]' 00:14:30.436 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.436 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:30.436 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.436 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:30.436 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.694 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.694 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.695 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.953 15:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid 
a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:14:31.521 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.521 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:31.521 15:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.521 15:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.521 15:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.521 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.521 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:31.521 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:31.779 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:14:31.779 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.779 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:31.779 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:31.779 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:31.779 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.779 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:14:31.779 15:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.779 15:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.779 15:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.779 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:31.779 15:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.038 00:14:32.038 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.038 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.038 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.296 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:14:32.296 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.296 15:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.296 15:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.296 15:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.296 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.296 { 00:14:32.296 "auth": { 00:14:32.296 "dhgroup": "null", 00:14:32.296 "digest": "sha512", 00:14:32.296 "state": "completed" 00:14:32.296 }, 00:14:32.297 "cntlid": 103, 00:14:32.297 "listen_address": { 00:14:32.297 "adrfam": "IPv4", 00:14:32.297 "traddr": "10.0.0.2", 00:14:32.297 "trsvcid": "4420", 00:14:32.297 "trtype": "TCP" 00:14:32.297 }, 00:14:32.297 "peer_address": { 00:14:32.297 "adrfam": "IPv4", 00:14:32.297 "traddr": "10.0.0.1", 00:14:32.297 "trsvcid": "55894", 00:14:32.297 "trtype": "TCP" 00:14:32.297 }, 00:14:32.297 "qid": 0, 00:14:32.297 "state": "enabled", 00:14:32.297 "thread": "nvmf_tgt_poll_group_000" 00:14:32.297 } 00:14:32.297 ]' 00:14:32.297 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.297 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:32.297 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.297 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:32.297 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.555 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.555 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.555 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.813 15:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:14:33.379 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.379 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:33.379 15:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.379 15:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.379 15:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.379 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.379 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.379 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:33.379 15:37:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:33.637 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:14:33.637 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.637 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:33.637 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:33.637 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:33.637 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.637 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.637 15:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.637 15:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.637 15:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.637 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.637 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.895 00:14:33.895 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.895 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.895 15:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.155 15:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.155 15:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.155 15:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.155 15:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.155 15:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.155 15:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.155 { 00:14:34.155 "auth": { 00:14:34.155 "dhgroup": "ffdhe2048", 00:14:34.155 "digest": "sha512", 00:14:34.155 "state": "completed" 00:14:34.155 }, 00:14:34.155 "cntlid": 105, 00:14:34.155 "listen_address": { 00:14:34.155 "adrfam": "IPv4", 00:14:34.155 "traddr": "10.0.0.2", 00:14:34.155 "trsvcid": "4420", 00:14:34.155 "trtype": "TCP" 00:14:34.155 }, 00:14:34.155 "peer_address": { 00:14:34.155 "adrfam": "IPv4", 00:14:34.155 "traddr": "10.0.0.1", 00:14:34.155 "trsvcid": "55924", 00:14:34.155 "trtype": "TCP" 00:14:34.155 }, 00:14:34.155 "qid": 0, 
00:14:34.155 "state": "enabled", 00:14:34.155 "thread": "nvmf_tgt_poll_group_000" 00:14:34.155 } 00:14:34.155 ]' 00:14:34.155 15:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.155 15:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:34.155 15:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.414 15:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:34.414 15:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.414 15:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.414 15:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.414 15:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.674 15:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:14:35.240 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.240 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:35.240 15:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.240 15:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.240 15:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.240 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.240 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:35.240 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:35.807 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:14:35.807 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.807 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:35.807 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:35.807 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:35.807 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.807 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.807 15:37:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.807 15:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.807 15:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.807 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.807 15:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.065 00:14:36.065 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.065 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.065 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.324 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.324 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.324 15:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.324 15:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.324 15:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.324 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.324 { 00:14:36.324 "auth": { 00:14:36.324 "dhgroup": "ffdhe2048", 00:14:36.324 "digest": "sha512", 00:14:36.324 "state": "completed" 00:14:36.324 }, 00:14:36.324 "cntlid": 107, 00:14:36.324 "listen_address": { 00:14:36.324 "adrfam": "IPv4", 00:14:36.324 "traddr": "10.0.0.2", 00:14:36.324 "trsvcid": "4420", 00:14:36.324 "trtype": "TCP" 00:14:36.324 }, 00:14:36.324 "peer_address": { 00:14:36.324 "adrfam": "IPv4", 00:14:36.324 "traddr": "10.0.0.1", 00:14:36.324 "trsvcid": "36890", 00:14:36.324 "trtype": "TCP" 00:14:36.324 }, 00:14:36.324 "qid": 0, 00:14:36.324 "state": "enabled", 00:14:36.324 "thread": "nvmf_tgt_poll_group_000" 00:14:36.324 } 00:14:36.324 ]' 00:14:36.324 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.324 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:36.324 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.324 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:36.324 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.582 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.582 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.582 15:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.840 15:37:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:14:37.407 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.407 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:37.407 15:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.407 15:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.407 15:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.407 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.407 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:37.407 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:37.665 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:14:37.665 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.665 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:37.665 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:37.665 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:37.665 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.665 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.665 15:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.665 15:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.665 15:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.665 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.665 15:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.923 00:14:38.181 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.181 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:14:38.181 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.438 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.439 { 00:14:38.439 "auth": { 00:14:38.439 "dhgroup": "ffdhe2048", 00:14:38.439 "digest": "sha512", 00:14:38.439 "state": "completed" 00:14:38.439 }, 00:14:38.439 "cntlid": 109, 00:14:38.439 "listen_address": { 00:14:38.439 "adrfam": "IPv4", 00:14:38.439 "traddr": "10.0.0.2", 00:14:38.439 "trsvcid": "4420", 00:14:38.439 "trtype": "TCP" 00:14:38.439 }, 00:14:38.439 "peer_address": { 00:14:38.439 "adrfam": "IPv4", 00:14:38.439 "traddr": "10.0.0.1", 00:14:38.439 "trsvcid": "36918", 00:14:38.439 "trtype": "TCP" 00:14:38.439 }, 00:14:38.439 "qid": 0, 00:14:38.439 "state": "enabled", 00:14:38.439 "thread": "nvmf_tgt_poll_group_000" 00:14:38.439 } 00:14:38.439 ]' 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.439 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.697 15:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:14:39.264 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.264 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:39.264 15:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.264 15:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.264 15:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.264 15:37:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.264 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:39.264 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:39.524 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:14:39.524 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.524 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:39.524 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:39.524 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:39.524 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.524 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:14:39.524 15:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.524 15:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.524 15:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.524 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:39.524 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:40.093 00:14:40.093 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.093 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.094 15:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.353 { 00:14:40.353 "auth": { 00:14:40.353 "dhgroup": "ffdhe2048", 00:14:40.353 "digest": "sha512", 00:14:40.353 "state": "completed" 00:14:40.353 }, 00:14:40.353 "cntlid": 111, 00:14:40.353 "listen_address": { 00:14:40.353 "adrfam": "IPv4", 00:14:40.353 "traddr": "10.0.0.2", 00:14:40.353 "trsvcid": "4420", 00:14:40.353 "trtype": "TCP" 00:14:40.353 }, 00:14:40.353 
"peer_address": { 00:14:40.353 "adrfam": "IPv4", 00:14:40.353 "traddr": "10.0.0.1", 00:14:40.353 "trsvcid": "36942", 00:14:40.353 "trtype": "TCP" 00:14:40.353 }, 00:14:40.353 "qid": 0, 00:14:40.353 "state": "enabled", 00:14:40.353 "thread": "nvmf_tgt_poll_group_000" 00:14:40.353 } 00:14:40.353 ]' 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.353 15:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.612 15:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:14:41.549 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.549 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:41.549 15:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.549 15:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.549 15:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.549 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:41.549 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.549 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.550 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.809 00:14:41.809 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.809 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.809 15:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.068 15:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.068 15:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.068 15:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.068 15:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.068 15:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.068 15:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.068 { 00:14:42.068 "auth": { 00:14:42.068 "dhgroup": "ffdhe3072", 00:14:42.068 "digest": "sha512", 00:14:42.068 "state": "completed" 00:14:42.068 }, 00:14:42.068 "cntlid": 113, 00:14:42.068 "listen_address": { 00:14:42.068 "adrfam": "IPv4", 00:14:42.068 "traddr": "10.0.0.2", 00:14:42.068 "trsvcid": "4420", 00:14:42.068 "trtype": "TCP" 00:14:42.068 }, 00:14:42.068 "peer_address": { 00:14:42.068 "adrfam": "IPv4", 00:14:42.068 "traddr": "10.0.0.1", 00:14:42.068 "trsvcid": "36972", 00:14:42.068 "trtype": "TCP" 00:14:42.068 }, 00:14:42.068 "qid": 0, 00:14:42.068 "state": "enabled", 00:14:42.068 "thread": "nvmf_tgt_poll_group_000" 00:14:42.068 } 00:14:42.068 ]' 00:14:42.068 15:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.326 15:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.327 15:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.327 15:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:42.327 15:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.327 15:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.327 15:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.327 15:37:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.586 15:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:14:43.153 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.153 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:43.153 15:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.153 15:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.153 15:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.153 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.153 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:43.153 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:43.427 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:14:43.427 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.427 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:43.427 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:43.427 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:43.427 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.427 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.427 15:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.427 15:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.427 15:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.427 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.427 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.739 00:14:43.739 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.739 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.739 15:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.000 15:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.000 15:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.000 15:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.000 15:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.000 15:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.000 15:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.000 { 00:14:44.000 "auth": { 00:14:44.000 "dhgroup": "ffdhe3072", 00:14:44.000 "digest": "sha512", 00:14:44.000 "state": "completed" 00:14:44.000 }, 00:14:44.000 "cntlid": 115, 00:14:44.000 "listen_address": { 00:14:44.000 "adrfam": "IPv4", 00:14:44.000 "traddr": "10.0.0.2", 00:14:44.000 "trsvcid": "4420", 00:14:44.000 "trtype": "TCP" 00:14:44.000 }, 00:14:44.000 "peer_address": { 00:14:44.000 "adrfam": "IPv4", 00:14:44.000 "traddr": "10.0.0.1", 00:14:44.000 "trsvcid": "36992", 00:14:44.000 "trtype": "TCP" 00:14:44.000 }, 00:14:44.000 "qid": 0, 00:14:44.000 "state": "enabled", 00:14:44.000 "thread": "nvmf_tgt_poll_group_000" 00:14:44.000 } 00:14:44.000 ]' 00:14:44.000 15:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.259 15:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.259 15:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.259 15:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:44.259 15:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.259 15:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.259 15:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.259 15:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.517 15:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:14:45.453 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.453 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:45.453 15:37:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.453 15:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.453 15:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.453 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.453 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.454 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.713 00:14:45.972 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.972 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.972 15:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.231 { 00:14:46.231 "auth": { 
00:14:46.231 "dhgroup": "ffdhe3072", 00:14:46.231 "digest": "sha512", 00:14:46.231 "state": "completed" 00:14:46.231 }, 00:14:46.231 "cntlid": 117, 00:14:46.231 "listen_address": { 00:14:46.231 "adrfam": "IPv4", 00:14:46.231 "traddr": "10.0.0.2", 00:14:46.231 "trsvcid": "4420", 00:14:46.231 "trtype": "TCP" 00:14:46.231 }, 00:14:46.231 "peer_address": { 00:14:46.231 "adrfam": "IPv4", 00:14:46.231 "traddr": "10.0.0.1", 00:14:46.231 "trsvcid": "57836", 00:14:46.231 "trtype": "TCP" 00:14:46.231 }, 00:14:46.231 "qid": 0, 00:14:46.231 "state": "enabled", 00:14:46.231 "thread": "nvmf_tgt_poll_group_000" 00:14:46.231 } 00:14:46.231 ]' 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.231 15:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.490 15:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:14:47.058 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.058 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:47.058 15:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.058 15:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.058 15:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.058 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.058 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:47.058 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:47.317 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:14:47.317 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.317 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:47.317 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:14:47.317 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:47.317 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.317 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:14:47.317 15:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.317 15:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.317 15:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.317 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:47.317 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:47.886 00:14:47.886 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.886 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.886 15:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.143 { 00:14:48.143 "auth": { 00:14:48.143 "dhgroup": "ffdhe3072", 00:14:48.143 "digest": "sha512", 00:14:48.143 "state": "completed" 00:14:48.143 }, 00:14:48.143 "cntlid": 119, 00:14:48.143 "listen_address": { 00:14:48.143 "adrfam": "IPv4", 00:14:48.143 "traddr": "10.0.0.2", 00:14:48.143 "trsvcid": "4420", 00:14:48.143 "trtype": "TCP" 00:14:48.143 }, 00:14:48.143 "peer_address": { 00:14:48.143 "adrfam": "IPv4", 00:14:48.143 "traddr": "10.0.0.1", 00:14:48.143 "trsvcid": "57874", 00:14:48.143 "trtype": "TCP" 00:14:48.143 }, 00:14:48.143 "qid": 0, 00:14:48.143 "state": "enabled", 00:14:48.143 "thread": "nvmf_tgt_poll_group_000" 00:14:48.143 } 00:14:48.143 ]' 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.143 15:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.401 15:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:14:48.969 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.969 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:48.969 15:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.969 15:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.969 15:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.969 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.969 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.969 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:48.969 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:49.537 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:14:49.537 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.537 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:49.537 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:49.537 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:49.537 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.537 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.537 15:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.537 15:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.537 15:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.537 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.537 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.796 00:14:49.796 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.796 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.796 15:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.055 15:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.055 15:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.055 15:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.055 15:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.055 15:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.055 15:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.055 { 00:14:50.055 "auth": { 00:14:50.055 "dhgroup": "ffdhe4096", 00:14:50.055 "digest": "sha512", 00:14:50.055 "state": "completed" 00:14:50.055 }, 00:14:50.055 "cntlid": 121, 00:14:50.055 "listen_address": { 00:14:50.055 "adrfam": "IPv4", 00:14:50.055 "traddr": "10.0.0.2", 00:14:50.055 "trsvcid": "4420", 00:14:50.055 "trtype": "TCP" 00:14:50.055 }, 00:14:50.055 "peer_address": { 00:14:50.055 "adrfam": "IPv4", 00:14:50.055 "traddr": "10.0.0.1", 00:14:50.055 "trsvcid": "57910", 00:14:50.055 "trtype": "TCP" 00:14:50.055 }, 00:14:50.055 "qid": 0, 00:14:50.055 "state": "enabled", 00:14:50.055 "thread": "nvmf_tgt_poll_group_000" 00:14:50.055 } 00:14:50.055 ]' 00:14:50.055 15:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.055 15:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.055 15:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.055 15:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:50.055 15:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.314 15:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.314 15:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.314 15:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.314 15:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:14:51.250 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.250 
15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:51.250 15:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.250 15:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.250 15:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.250 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.250 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:51.250 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:51.509 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:14:51.509 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.509 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:51.509 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:51.509 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:51.509 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.509 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.509 15:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.509 15:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.509 15:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.509 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.509 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.768 00:14:51.768 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.768 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.768 15:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.026 15:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.026 15:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.026 15:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.026 15:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:52.026 15:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.026 15:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.026 { 00:14:52.026 "auth": { 00:14:52.026 "dhgroup": "ffdhe4096", 00:14:52.026 "digest": "sha512", 00:14:52.026 "state": "completed" 00:14:52.026 }, 00:14:52.026 "cntlid": 123, 00:14:52.026 "listen_address": { 00:14:52.026 "adrfam": "IPv4", 00:14:52.026 "traddr": "10.0.0.2", 00:14:52.026 "trsvcid": "4420", 00:14:52.026 "trtype": "TCP" 00:14:52.026 }, 00:14:52.026 "peer_address": { 00:14:52.026 "adrfam": "IPv4", 00:14:52.026 "traddr": "10.0.0.1", 00:14:52.026 "trsvcid": "57934", 00:14:52.026 "trtype": "TCP" 00:14:52.026 }, 00:14:52.026 "qid": 0, 00:14:52.026 "state": "enabled", 00:14:52.026 "thread": "nvmf_tgt_poll_group_000" 00:14:52.026 } 00:14:52.026 ]' 00:14:52.026 15:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.026 15:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.026 15:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.285 15:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:52.285 15:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.285 15:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.285 15:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.285 15:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.544 15:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:14:53.112 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.112 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:53.112 15:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.112 15:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.112 15:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.112 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.112 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:53.112 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:53.372 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:14:53.372 15:37:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.372 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:53.372 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:53.372 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:53.372 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.372 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.372 15:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.372 15:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.372 15:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.372 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.372 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.940 00:14:53.940 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.940 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.940 15:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.200 { 00:14:54.200 "auth": { 00:14:54.200 "dhgroup": "ffdhe4096", 00:14:54.200 "digest": "sha512", 00:14:54.200 "state": "completed" 00:14:54.200 }, 00:14:54.200 "cntlid": 125, 00:14:54.200 "listen_address": { 00:14:54.200 "adrfam": "IPv4", 00:14:54.200 "traddr": "10.0.0.2", 00:14:54.200 "trsvcid": "4420", 00:14:54.200 "trtype": "TCP" 00:14:54.200 }, 00:14:54.200 "peer_address": { 00:14:54.200 "adrfam": "IPv4", 00:14:54.200 "traddr": "10.0.0.1", 00:14:54.200 "trsvcid": "57968", 00:14:54.200 "trtype": "TCP" 00:14:54.200 }, 00:14:54.200 "qid": 0, 00:14:54.200 "state": "enabled", 00:14:54.200 "thread": "nvmf_tgt_poll_group_000" 00:14:54.200 } 00:14:54.200 ]' 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.200 15:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.459 15:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:14:55.395 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.395 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:55.395 15:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.395 15:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.395 15:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.395 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.395 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:55.395 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:55.654 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:14:55.654 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.654 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:55.655 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:55.655 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:55.655 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.655 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:14:55.655 15:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.655 15:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.655 15:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.655 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.655 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.914 00:14:55.914 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.914 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.914 15:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.173 { 00:14:56.173 "auth": { 00:14:56.173 "dhgroup": "ffdhe4096", 00:14:56.173 "digest": "sha512", 00:14:56.173 "state": "completed" 00:14:56.173 }, 00:14:56.173 "cntlid": 127, 00:14:56.173 "listen_address": { 00:14:56.173 "adrfam": "IPv4", 00:14:56.173 "traddr": "10.0.0.2", 00:14:56.173 "trsvcid": "4420", 00:14:56.173 "trtype": "TCP" 00:14:56.173 }, 00:14:56.173 "peer_address": { 00:14:56.173 "adrfam": "IPv4", 00:14:56.173 "traddr": "10.0.0.1", 00:14:56.173 "trsvcid": "36940", 00:14:56.173 "trtype": "TCP" 00:14:56.173 }, 00:14:56.173 "qid": 0, 00:14:56.173 "state": "enabled", 00:14:56.173 "thread": "nvmf_tgt_poll_group_000" 00:14:56.173 } 00:14:56.173 ]' 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.173 15:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.740 15:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:14:57.308 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.308 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.308 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:57.308 15:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.308 15:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.308 15:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.308 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.308 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.308 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:57.308 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:57.568 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:14:57.568 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.568 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:57.568 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:57.568 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:57.568 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.568 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.568 15:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.568 15:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.568 15:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.568 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.568 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.827 00:14:58.086 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.086 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.086 15:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.086 15:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.086 15:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:14:58.086 15:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.086 15:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.086 15:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.086 15:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.086 { 00:14:58.086 "auth": { 00:14:58.086 "dhgroup": "ffdhe6144", 00:14:58.086 "digest": "sha512", 00:14:58.086 "state": "completed" 00:14:58.086 }, 00:14:58.086 "cntlid": 129, 00:14:58.086 "listen_address": { 00:14:58.086 "adrfam": "IPv4", 00:14:58.086 "traddr": "10.0.0.2", 00:14:58.086 "trsvcid": "4420", 00:14:58.086 "trtype": "TCP" 00:14:58.086 }, 00:14:58.086 "peer_address": { 00:14:58.086 "adrfam": "IPv4", 00:14:58.086 "traddr": "10.0.0.1", 00:14:58.086 "trsvcid": "36974", 00:14:58.086 "trtype": "TCP" 00:14:58.086 }, 00:14:58.086 "qid": 0, 00:14:58.086 "state": "enabled", 00:14:58.087 "thread": "nvmf_tgt_poll_group_000" 00:14:58.087 } 00:14:58.087 ]' 00:14:58.087 15:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.344 15:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.344 15:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.344 15:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:58.344 15:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.344 15:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.344 15:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.344 15:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.603 15:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:14:59.538 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.538 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:14:59.538 15:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.538 15:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.539 15:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.106 00:15:00.106 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.106 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.106 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.365 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.365 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.365 15:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.365 15:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.365 15:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.365 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.365 { 00:15:00.365 "auth": { 00:15:00.365 "dhgroup": "ffdhe6144", 00:15:00.365 "digest": "sha512", 00:15:00.365 "state": "completed" 00:15:00.365 }, 00:15:00.365 "cntlid": 131, 00:15:00.365 "listen_address": { 00:15:00.365 "adrfam": "IPv4", 00:15:00.365 "traddr": "10.0.0.2", 00:15:00.365 "trsvcid": "4420", 00:15:00.365 "trtype": "TCP" 00:15:00.365 }, 00:15:00.365 "peer_address": { 00:15:00.365 "adrfam": "IPv4", 00:15:00.365 "traddr": "10.0.0.1", 00:15:00.365 "trsvcid": "36996", 00:15:00.365 "trtype": "TCP" 00:15:00.365 }, 00:15:00.365 "qid": 0, 00:15:00.365 "state": "enabled", 00:15:00.365 "thread": "nvmf_tgt_poll_group_000" 00:15:00.365 } 00:15:00.365 ]' 00:15:00.365 
15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.365 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.365 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.624 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:00.624 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.624 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.624 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.624 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.883 15:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:15:01.449 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.449 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:01.449 15:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.449 15:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.449 15:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.449 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.449 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:01.449 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:02.015 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:15:02.015 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.015 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:02.015 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:02.015 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:02.015 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.015 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.015 15:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.015 15:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:02.015 15:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.015 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.015 15:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.273 00:15:02.273 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.273 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.273 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.532 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.532 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.532 15:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.532 15:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.532 15:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.532 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.532 { 00:15:02.532 "auth": { 00:15:02.532 "dhgroup": "ffdhe6144", 00:15:02.532 "digest": "sha512", 00:15:02.532 "state": "completed" 00:15:02.532 }, 00:15:02.532 "cntlid": 133, 00:15:02.532 "listen_address": { 00:15:02.532 "adrfam": "IPv4", 00:15:02.532 "traddr": "10.0.0.2", 00:15:02.532 "trsvcid": "4420", 00:15:02.532 "trtype": "TCP" 00:15:02.532 }, 00:15:02.532 "peer_address": { 00:15:02.532 "adrfam": "IPv4", 00:15:02.532 "traddr": "10.0.0.1", 00:15:02.532 "trsvcid": "37016", 00:15:02.532 "trtype": "TCP" 00:15:02.532 }, 00:15:02.532 "qid": 0, 00:15:02.532 "state": "enabled", 00:15:02.532 "thread": "nvmf_tgt_poll_group_000" 00:15:02.532 } 00:15:02.532 ]' 00:15:02.532 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.532 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.790 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.790 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:02.790 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.790 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.790 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.790 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.048 15:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid 
a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:15:03.613 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.613 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:03.613 15:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.613 15:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.613 15:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.613 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.613 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:03.613 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:03.871 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:15:03.871 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.871 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:03.871 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:03.871 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:03.871 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.871 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:15:03.871 15:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.871 15:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.871 15:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.871 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:03.871 15:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.435 00:15:04.435 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.435 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.435 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.691 { 00:15:04.691 "auth": { 00:15:04.691 "dhgroup": "ffdhe6144", 00:15:04.691 "digest": "sha512", 00:15:04.691 "state": "completed" 00:15:04.691 }, 00:15:04.691 "cntlid": 135, 00:15:04.691 "listen_address": { 00:15:04.691 "adrfam": "IPv4", 00:15:04.691 "traddr": "10.0.0.2", 00:15:04.691 "trsvcid": "4420", 00:15:04.691 "trtype": "TCP" 00:15:04.691 }, 00:15:04.691 "peer_address": { 00:15:04.691 "adrfam": "IPv4", 00:15:04.691 "traddr": "10.0.0.1", 00:15:04.691 "trsvcid": "37038", 00:15:04.691 "trtype": "TCP" 00:15:04.691 }, 00:15:04.691 "qid": 0, 00:15:04.691 "state": "enabled", 00:15:04.691 "thread": "nvmf_tgt_poll_group_000" 00:15:04.691 } 00:15:04.691 ]' 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.691 15:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.960 15:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:15:05.898 15:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.898 15:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:05.898 15:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.898 15:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.898 15:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.898 15:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:05.898 15:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.898 15:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:15:05.898 15:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:05.898 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:15:05.898 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.898 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:05.898 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:05.898 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:05.898 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.898 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.898 15:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.898 15:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.898 15:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.898 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.898 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.864 00:15:06.864 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.864 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.864 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.864 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.864 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.864 15:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.864 15:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.864 15:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.864 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.864 { 00:15:06.864 "auth": { 00:15:06.864 "dhgroup": "ffdhe8192", 00:15:06.864 "digest": "sha512", 00:15:06.864 "state": "completed" 00:15:06.864 }, 00:15:06.864 "cntlid": 137, 00:15:06.864 "listen_address": { 00:15:06.864 "adrfam": "IPv4", 00:15:06.864 "traddr": "10.0.0.2", 00:15:06.864 "trsvcid": "4420", 00:15:06.864 "trtype": "TCP" 00:15:06.864 }, 00:15:06.864 "peer_address": { 00:15:06.864 "adrfam": "IPv4", 00:15:06.864 "traddr": "10.0.0.1", 00:15:06.864 "trsvcid": "41540", 00:15:06.864 "trtype": "TCP" 00:15:06.864 }, 
00:15:06.864 "qid": 0, 00:15:06.864 "state": "enabled", 00:15:06.864 "thread": "nvmf_tgt_poll_group_000" 00:15:06.864 } 00:15:06.864 ]' 00:15:06.864 15:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.122 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:07.122 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.122 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:07.122 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.122 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.122 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.122 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.381 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:15:07.947 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.947 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:07.947 15:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.947 15:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.947 15:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.947 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.947 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:07.947 15:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:08.205 15:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:15:08.206 15:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.206 15:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:08.206 15:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:08.206 15:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:08.206 15:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.206 15:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:15:08.206 15:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.206 15:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.206 15:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.206 15:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.206 15:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.771 00:15:08.771 15:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.771 15:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.771 15:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.338 { 00:15:09.338 "auth": { 00:15:09.338 "dhgroup": "ffdhe8192", 00:15:09.338 "digest": "sha512", 00:15:09.338 "state": "completed" 00:15:09.338 }, 00:15:09.338 "cntlid": 139, 00:15:09.338 "listen_address": { 00:15:09.338 "adrfam": "IPv4", 00:15:09.338 "traddr": "10.0.0.2", 00:15:09.338 "trsvcid": "4420", 00:15:09.338 "trtype": "TCP" 00:15:09.338 }, 00:15:09.338 "peer_address": { 00:15:09.338 "adrfam": "IPv4", 00:15:09.338 "traddr": "10.0.0.1", 00:15:09.338 "trsvcid": "41576", 00:15:09.338 "trtype": "TCP" 00:15:09.338 }, 00:15:09.338 "qid": 0, 00:15:09.338 "state": "enabled", 00:15:09.338 "thread": "nvmf_tgt_poll_group_000" 00:15:09.338 } 00:15:09.338 ]' 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.338 15:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.597 15:38:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:01:MGYwODRlMGM1MmNhM2YzNWY1MGZjNWU5NzE2ZmFmODG9/VRG: --dhchap-ctrl-secret DHHC-1:02:NzBhMjVkOTFhYzY2YzM1MDVlODhlZjVhYjAzMGQ5OWMxMDg5NjBkMTdlYjUyZWRiFFr3ZQ==: 00:15:10.165 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.165 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:10.165 15:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.165 15:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.165 15:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.165 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.165 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.165 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.424 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:15:10.424 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.424 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:10.424 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:10.424 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:10.424 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.424 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.424 15:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.424 15:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.424 15:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.424 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.424 15:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.993 00:15:10.993 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.993 15:38:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.993 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.561 { 00:15:11.561 "auth": { 00:15:11.561 "dhgroup": "ffdhe8192", 00:15:11.561 "digest": "sha512", 00:15:11.561 "state": "completed" 00:15:11.561 }, 00:15:11.561 "cntlid": 141, 00:15:11.561 "listen_address": { 00:15:11.561 "adrfam": "IPv4", 00:15:11.561 "traddr": "10.0.0.2", 00:15:11.561 "trsvcid": "4420", 00:15:11.561 "trtype": "TCP" 00:15:11.561 }, 00:15:11.561 "peer_address": { 00:15:11.561 "adrfam": "IPv4", 00:15:11.561 "traddr": "10.0.0.1", 00:15:11.561 "trsvcid": "41610", 00:15:11.561 "trtype": "TCP" 00:15:11.561 }, 00:15:11.561 "qid": 0, 00:15:11.561 "state": "enabled", 00:15:11.561 "thread": "nvmf_tgt_poll_group_000" 00:15:11.561 } 00:15:11.561 ]' 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.561 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.820 15:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:02:NzM4NzMwODE0ZWU3ZWJhYzI1MGYzMDBlN2QzMGMwMDUzOTYxMDI1ZTA4OTk2Yjg07C5QBA==: --dhchap-ctrl-secret DHHC-1:01:Mzg4MDJhMGRmMWZiNTBiY2Y0Mjg5YzdlMTM3M2I2NzTLQemy: 00:15:12.388 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.388 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:12.388 15:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.388 15:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.388 15:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.388 
15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.388 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:12.388 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:12.647 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:15:12.647 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.647 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:12.647 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:12.647 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:12.647 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.647 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:15:12.647 15:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.647 15:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.647 15:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.647 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:12.647 15:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.216 00:15:13.216 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.216 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.216 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.475 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.475 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.475 15:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.475 15:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.475 15:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.475 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.475 { 00:15:13.475 "auth": { 00:15:13.475 "dhgroup": "ffdhe8192", 00:15:13.475 "digest": "sha512", 00:15:13.475 "state": "completed" 00:15:13.475 }, 00:15:13.475 "cntlid": 143, 00:15:13.475 "listen_address": { 00:15:13.475 "adrfam": "IPv4", 00:15:13.475 "traddr": "10.0.0.2", 00:15:13.475 "trsvcid": "4420", 00:15:13.475 "trtype": "TCP" 00:15:13.475 }, 00:15:13.475 
"peer_address": { 00:15:13.475 "adrfam": "IPv4", 00:15:13.475 "traddr": "10.0.0.1", 00:15:13.475 "trsvcid": "41644", 00:15:13.475 "trtype": "TCP" 00:15:13.475 }, 00:15:13.475 "qid": 0, 00:15:13.475 "state": "enabled", 00:15:13.475 "thread": "nvmf_tgt_poll_group_000" 00:15:13.475 } 00:15:13.475 ]' 00:15:13.475 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.475 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:13.475 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.475 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:13.475 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.735 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.735 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.735 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.993 15:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:15:14.560 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.560 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:14.560 15:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.560 15:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.560 15:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.560 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:14.560 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:15:14.560 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:14.560 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:14.560 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:14.560 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:14.819 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:15:14.819 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.819 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:14.819 15:38:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:14.819 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:14.819 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.819 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.819 15:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.819 15:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.819 15:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.819 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.819 15:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.387 00:15:15.387 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.387 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.387 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.646 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.646 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.646 15:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.646 15:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.646 15:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.646 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.646 { 00:15:15.646 "auth": { 00:15:15.646 "dhgroup": "ffdhe8192", 00:15:15.646 "digest": "sha512", 00:15:15.646 "state": "completed" 00:15:15.646 }, 00:15:15.646 "cntlid": 145, 00:15:15.646 "listen_address": { 00:15:15.646 "adrfam": "IPv4", 00:15:15.646 "traddr": "10.0.0.2", 00:15:15.646 "trsvcid": "4420", 00:15:15.646 "trtype": "TCP" 00:15:15.646 }, 00:15:15.646 "peer_address": { 00:15:15.646 "adrfam": "IPv4", 00:15:15.646 "traddr": "10.0.0.1", 00:15:15.646 "trsvcid": "60188", 00:15:15.646 "trtype": "TCP" 00:15:15.646 }, 00:15:15.646 "qid": 0, 00:15:15.646 "state": "enabled", 00:15:15.646 "thread": "nvmf_tgt_poll_group_000" 00:15:15.646 } 00:15:15.646 ]' 00:15:15.646 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.646 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:15.646 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.905 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:15.905 15:38:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.905 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.905 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.905 15:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.164 15:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret DHHC-1:00:ODFiMzE2OTQ2NjA4ZTg2NTBlZDFmZjcxNWU3NTI3ODlkNDNkNjdlMDkzZjc2ZDc0DEPscQ==: --dhchap-ctrl-secret DHHC-1:03:N2NlZTA0N2NlYTg2YThjOWIwYzZlMGFmMjcxNjJkMGY4ZGVlNjY4Mjc2YjgzYTJkMWYzYWNiNzliM2I5ZjU1ZC17J24=: 00:15:16.732 15:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 00:15:16.733 15:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:17.670 2024/07/15 15:38:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:17.670 request: 00:15:17.670 { 00:15:17.670 "method": "bdev_nvme_attach_controller", 00:15:17.670 "params": { 00:15:17.670 "name": "nvme0", 00:15:17.670 "trtype": "tcp", 00:15:17.670 "traddr": "10.0.0.2", 00:15:17.670 "adrfam": "ipv4", 00:15:17.670 "trsvcid": "4420", 00:15:17.670 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:17.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb", 00:15:17.670 "prchk_reftag": false, 00:15:17.670 "prchk_guard": false, 00:15:17.670 "hdgst": false, 00:15:17.670 "ddgst": false, 00:15:17.670 "dhchap_key": "key2" 00:15:17.670 } 00:15:17.670 } 00:15:17.670 Got JSON-RPC error response 00:15:17.670 GoRPCClient: error on JSON-RPC call 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:17.670 15:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:17.930 2024/07/15 15:38:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:17.930 request: 00:15:17.930 { 00:15:17.930 "method": "bdev_nvme_attach_controller", 00:15:17.930 "params": { 00:15:17.930 "name": "nvme0", 00:15:17.930 "trtype": "tcp", 00:15:17.930 "traddr": "10.0.0.2", 00:15:17.930 "adrfam": "ipv4", 00:15:17.930 "trsvcid": "4420", 00:15:17.930 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:17.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb", 00:15:17.930 "prchk_reftag": false, 00:15:17.930 "prchk_guard": false, 00:15:17.930 "hdgst": false, 00:15:17.930 "ddgst": false, 00:15:17.930 "dhchap_key": "key1", 00:15:17.930 "dhchap_ctrlr_key": "ckey2" 00:15:17.930 } 00:15:17.930 } 00:15:17.930 Got JSON-RPC error response 00:15:17.930 GoRPCClient: error on JSON-RPC call 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key1 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.214 15:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.785 2024/07/15 15:38:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:18.785 request: 00:15:18.785 { 00:15:18.785 "method": "bdev_nvme_attach_controller", 00:15:18.785 "params": { 00:15:18.785 "name": "nvme0", 00:15:18.785 "trtype": "tcp", 00:15:18.785 "traddr": "10.0.0.2", 00:15:18.785 "adrfam": "ipv4", 00:15:18.785 "trsvcid": "4420", 00:15:18.785 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:18.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb", 00:15:18.785 "prchk_reftag": false, 00:15:18.785 "prchk_guard": false, 00:15:18.785 "hdgst": false, 00:15:18.785 "ddgst": false, 00:15:18.785 "dhchap_key": "key1", 00:15:18.785 "dhchap_ctrlr_key": "ckey1" 00:15:18.785 } 00:15:18.785 } 00:15:18.785 Got JSON-RPC error response 00:15:18.785 GoRPCClient: error on JSON-RPC call 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 
-- # es=1 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77684 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77684 ']' 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77684 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77684 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:18.785 killing process with pid 77684 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77684' 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77684 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77684 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82534 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82534 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82534 ']' 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
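The xtrace above is the waitforlisten helper: after nvmfappstart relaunches the target with --wait-for-rpc -L nvmf_auth, the test blocks until the process behind pid 82534 answers on /var/tmp/spdk.sock. A minimal sketch of such a wait loop, assuming the rpc.py path used throughout this run (the real helper in the repo's common test scripts is more thorough):

    waitforlisten_sketch() {                      # hypothetical, simplified helper
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # give up if the target died
            # rpc_get_methods succeeds once the RPC listener is accepting connections
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods \
                &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }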
00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.785 15:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82534 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82534 ']' 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.162 15:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
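At this point connect_authenticate sha512 ffdhe8192 3 has registered the host NQN on cnode0 with key3 only (no controller key), and the hostrpc bdev_nvme_attach_controller call that follows performs the actual DH-HMAC-CHAP handshake from the host side. Outside the harness, the same pair of RPCs could be issued by hand roughly as below; this is a sketch reusing the NQNs, address, and sockets from this run, with hostrpc simply being rpc.py pointed at /var/tmp/host.sock:

    # target side: allow the host on the subsystem and bind it to key3
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3
    # host side: attach a controller over TCP, authenticating with the same key
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3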
00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.162 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.731 00:15:20.731 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.731 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.731 15:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.990 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.990 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.990 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.990 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.990 15:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.990 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.990 { 00:15:20.990 "auth": { 00:15:20.990 "dhgroup": "ffdhe8192", 00:15:20.990 "digest": "sha512", 00:15:20.990 "state": "completed" 00:15:20.990 }, 00:15:20.990 "cntlid": 1, 00:15:20.990 "listen_address": { 00:15:20.990 "adrfam": "IPv4", 00:15:20.990 "traddr": "10.0.0.2", 00:15:20.990 "trsvcid": "4420", 00:15:20.990 "trtype": "TCP" 00:15:20.990 }, 00:15:20.990 "peer_address": { 00:15:20.990 "adrfam": "IPv4", 00:15:20.990 "traddr": "10.0.0.1", 00:15:20.990 "trsvcid": "60240", 00:15:20.990 "trtype": "TCP" 00:15:20.990 }, 00:15:20.990 "qid": 0, 00:15:20.990 "state": "enabled", 00:15:20.990 "thread": "nvmf_tgt_poll_group_000" 00:15:20.990 } 00:15:20.990 ]' 00:15:20.990 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.990 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.990 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.250 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:21.250 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.250 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.250 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.250 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.510 15:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-secret 
DHHC-1:03:MjZhYTZiMmJlNjE3MjY4NzczYWRjNDZjYzAzNDM5NWEzNWY1NDFlYzgxNDA4NzU1ZjBmY2RlZjgyZmZmNDVhYXEkYHg=: 00:15:22.077 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.077 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:22.077 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.077 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.077 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.077 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --dhchap-key key3 00:15:22.077 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.077 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.077 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.077 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:22.077 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:22.335 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.335 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:22.335 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.335 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:22.335 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:22.335 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:22.335 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:22.335 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.335 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.594 2024/07/15 15:38:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:22.594 request: 00:15:22.594 { 00:15:22.594 "method": "bdev_nvme_attach_controller", 00:15:22.594 "params": { 00:15:22.594 "name": "nvme0", 00:15:22.594 "trtype": "tcp", 00:15:22.594 "traddr": "10.0.0.2", 00:15:22.594 "adrfam": "ipv4", 00:15:22.594 "trsvcid": "4420", 00:15:22.594 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:22.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb", 00:15:22.594 "prchk_reftag": false, 00:15:22.594 "prchk_guard": false, 00:15:22.594 "hdgst": false, 00:15:22.594 "ddgst": false, 00:15:22.594 "dhchap_key": "key3" 00:15:22.594 } 00:15:22.594 } 00:15:22.594 Got JSON-RPC error response 00:15:22.594 GoRPCClient: error on JSON-RPC call 00:15:22.594 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:22.594 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:22.594 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:22.594 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:22.594 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:15:22.594 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:15:22.595 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:22.595 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:22.853 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.853 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:22.853 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.853 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:22.853 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:22.853 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:22.853 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:22.853 15:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.853 15:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.111 2024/07/15 15:38:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:23.111 request: 00:15:23.111 { 00:15:23.111 "method": "bdev_nvme_attach_controller", 00:15:23.111 "params": { 00:15:23.111 "name": "nvme0", 00:15:23.111 "trtype": "tcp", 00:15:23.111 "traddr": "10.0.0.2", 00:15:23.111 "adrfam": "ipv4", 00:15:23.111 "trsvcid": "4420", 00:15:23.111 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:23.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb", 00:15:23.111 "prchk_reftag": false, 00:15:23.111 "prchk_guard": false, 00:15:23.111 "hdgst": false, 00:15:23.111 "ddgst": false, 00:15:23.111 "dhchap_key": "key3" 00:15:23.111 } 00:15:23.111 } 00:15:23.111 Got JSON-RPC error response 00:15:23.111 GoRPCClient: error on JSON-RPC call 00:15:23.111 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:23.111 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:23.111 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:23.111 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:23.111 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:23.111 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:15:23.111 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:23.111 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:23.111 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:23.111 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:23.368 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:23.368 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.368 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.368 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.369 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.627 2024/07/15 15:38:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:23.627 request: 00:15:23.627 { 00:15:23.627 "method": "bdev_nvme_attach_controller", 00:15:23.627 "params": { 00:15:23.627 "name": "nvme0", 00:15:23.627 "trtype": "tcp", 00:15:23.627 "traddr": "10.0.0.2", 00:15:23.627 "adrfam": "ipv4", 00:15:23.627 "trsvcid": "4420", 00:15:23.627 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:23.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb", 00:15:23.627 "prchk_reftag": false, 00:15:23.627 "prchk_guard": false, 00:15:23.627 "hdgst": false, 00:15:23.627 "ddgst": false, 00:15:23.627 "dhchap_key": "key0", 00:15:23.627 "dhchap_ctrlr_key": "key1" 00:15:23.627 } 00:15:23.627 } 00:15:23.627 Got JSON-RPC error response 00:15:23.627 GoRPCClient: error on JSON-RPC call 00:15:23.627 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:23.627 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:23.627 15:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:23.627 15:38:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:23.627 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:23.627 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:23.884 00:15:23.884 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:15:23.884 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.884 15:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:15:24.142 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.142 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.142 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77728 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77728 ']' 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77728 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77728 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:24.400 killing process with pid 77728 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77728' 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77728 00:15:24.400 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77728 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:24.658 rmmod nvme_tcp 00:15:24.658 rmmod nvme_fabrics 00:15:24.658 rmmod nvme_keyring 
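The rmmod lines are the tail end of nvmftestfini/nvmfcleanup: with the TCP transport, nvme-tcp is unloaded first and nvme-fabrics afterwards, and the {1..20} loop traced above allows retries because the modules can stay busy briefly after the last disconnect. A simplified sketch of that retry pattern, assuming nothing else on the machine holds the modules:

    set +e                                # unloading may fail while references drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e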
00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82534 ']' 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82534 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82534 ']' 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82534 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82534 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:24.658 killing process with pid 82534 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82534' 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82534 00:15:24.658 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82534 00:15:24.916 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:24.916 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:24.916 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:24.916 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:24.916 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:24.916 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.916 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.916 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.917 15:38:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:24.917 15:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ttb /tmp/spdk.key-sha256.dJB /tmp/spdk.key-sha384.7My /tmp/spdk.key-sha512.DRX /tmp/spdk.key-sha512.NtD /tmp/spdk.key-sha384.PRq /tmp/spdk.key-sha256.t3F '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:24.917 00:15:24.917 real 2m47.347s 00:15:24.917 user 6m47.409s 00:15:24.917 sys 0m20.700s 00:15:24.917 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:24.917 15:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.917 ************************************ 00:15:24.917 END TEST nvmf_auth_target 00:15:24.917 ************************************ 00:15:24.917 15:38:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:24.917 15:38:19 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:15:24.917 15:38:19 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:24.917 15:38:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:24.917 15:38:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.917 15:38:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:24.917 ************************************ 00:15:24.917 START TEST nvmf_bdevio_no_huge 00:15:24.917 ************************************ 00:15:24.917 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:25.175 * Looking for test storage... 00:15:25.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@14 -- # nvmftestinit 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:25.175 Cannot find device "nvmf_tgt_br" 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.175 Cannot find device "nvmf_tgt_br2" 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:25.175 15:38:20 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:25.175 Cannot find device "nvmf_tgt_br" 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:25.175 Cannot find device "nvmf_tgt_br2" 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.175 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:25.432 15:38:20 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:25.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:15:25.432 00:15:25.432 --- 10.0.0.2 ping statistics --- 00:15:25.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.432 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:25.432 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:25.432 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:15:25.432 00:15:25.432 --- 10.0.0.3 ping statistics --- 00:15:25.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.432 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:25.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:25.432 00:15:25.432 --- 10.0.0.1 ping statistics --- 00:15:25.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.432 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:25.432 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=82927 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:25.433 15:38:20 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 82927 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 82927 ']' 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.433 15:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.690 [2024-07-15 15:38:20.566721] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:15:25.690 [2024-07-15 15:38:20.567761] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:25.690 [2024-07-15 15:38:20.723385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.948 [2024-07-15 15:38:20.855868] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.948 [2024-07-15 15:38:20.855940] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.948 [2024-07-15 15:38:20.855966] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.948 [2024-07-15 15:38:20.855976] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.948 [2024-07-15 15:38:20.855985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
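(Annotation: the nvmf_veth_init trace above reduces to the topology sketched below. This is a condensed restatement of the ip/iptables commands already shown, not an exact copy of the harness; it assumes root privileges and keeps the harness's interface names. The initiator side stays in the default namespace at 10.0.0.1 while the target ends sit in nvmf_tgt_ns_spdk at 10.0.0.2 and 10.0.0.3, all joined through the nvmf_br bridge with TCP port 4420 opened for the NVMe/TCP listener.)

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                    # target reachable from the initiator side
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # initiator reachable from inside the namespace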
00:15:25.948 [2024-07-15 15:38:20.856127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:25.948 [2024-07-15 15:38:20.856283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:25.948 [2024-07-15 15:38:20.856418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:25.948 [2024-07-15 15:38:20.856843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.513 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:26.513 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:15:26.513 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:26.513 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:26.513 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.513 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.513 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:26.513 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.513 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.513 [2024-07-15 15:38:21.601194] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.513 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.513 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:26.513 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.514 Malloc0 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.514 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.514 [2024-07-15 15:38:21.642367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.771 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.771 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:26.771 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:26.771 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:15:26.771 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:15:26.771 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:26.771 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:26.771 { 00:15:26.771 "params": { 00:15:26.771 "name": "Nvme$subsystem", 00:15:26.771 "trtype": "$TEST_TRANSPORT", 00:15:26.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:26.771 "adrfam": "ipv4", 00:15:26.771 "trsvcid": "$NVMF_PORT", 00:15:26.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:26.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:26.771 "hdgst": ${hdgst:-false}, 00:15:26.771 "ddgst": ${ddgst:-false} 00:15:26.771 }, 00:15:26.771 "method": "bdev_nvme_attach_controller" 00:15:26.771 } 00:15:26.771 EOF 00:15:26.771 )") 00:15:26.771 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:15:26.771 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:15:26.771 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:15:26.771 15:38:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:26.771 "params": { 00:15:26.771 "name": "Nvme1", 00:15:26.771 "trtype": "tcp", 00:15:26.771 "traddr": "10.0.0.2", 00:15:26.771 "adrfam": "ipv4", 00:15:26.771 "trsvcid": "4420", 00:15:26.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:26.771 "hdgst": false, 00:15:26.771 "ddgst": false 00:15:26.771 }, 00:15:26.771 "method": "bdev_nvme_attach_controller" 00:15:26.771 }' 00:15:26.771 [2024-07-15 15:38:21.696611] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:15:26.771 [2024-07-15 15:38:21.696695] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82981 ] 00:15:26.771 [2024-07-15 15:38:21.831724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.028 [2024-07-15 15:38:21.970608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.028 [2024-07-15 15:38:21.970764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.028 [2024-07-15 15:38:21.970773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.028 I/O targets: 00:15:27.028 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:27.028 00:15:27.028 00:15:27.029 CUnit - A unit testing framework for C - Version 2.1-3 00:15:27.029 http://cunit.sourceforge.net/ 00:15:27.029 00:15:27.029 00:15:27.029 Suite: bdevio tests on: Nvme1n1 00:15:27.286 Test: blockdev write read block ...passed 00:15:27.286 Test: blockdev write zeroes read block ...passed 00:15:27.286 Test: blockdev write zeroes read no split ...passed 00:15:27.286 Test: blockdev write zeroes read split ...passed 00:15:27.286 Test: blockdev write zeroes read split partial ...passed 00:15:27.286 Test: blockdev reset ...[2024-07-15 15:38:22.270244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:27.286 [2024-07-15 15:38:22.270368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0a460 (9): Bad file descriptor 00:15:27.286 [2024-07-15 15:38:22.282146] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:27.286 passed 00:15:27.286 Test: blockdev write read 8 blocks ...passed 00:15:27.286 Test: blockdev write read size > 128k ...passed 00:15:27.286 Test: blockdev write read invalid size ...passed 00:15:27.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.286 Test: blockdev write read max offset ...passed 00:15:27.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.286 Test: blockdev writev readv 8 blocks ...passed 00:15:27.286 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.544 Test: blockdev writev readv block ...passed 00:15:27.544 Test: blockdev writev readv size > 128k ...passed 00:15:27.544 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.544 Test: blockdev comparev and writev ...[2024-07-15 15:38:22.458221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.544 [2024-07-15 15:38:22.458312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:27.544 [2024-07-15 15:38:22.458335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.544 [2024-07-15 15:38:22.458347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:27.544 [2024-07-15 15:38:22.458817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.544 [2024-07-15 15:38:22.458845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:27.544 [2024-07-15 15:38:22.458864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.544 [2024-07-15 15:38:22.458875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:27.544 [2024-07-15 15:38:22.459302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.544 [2024-07-15 15:38:22.459333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:27.544 [2024-07-15 15:38:22.459352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.544 [2024-07-15 15:38:22.459363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:27.544 [2024-07-15 15:38:22.459746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.544 [2024-07-15 15:38:22.459773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:27.544 [2024-07-15 15:38:22.459791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.544 [2024-07-15 15:38:22.459802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:27.544 passed 00:15:27.544 Test: blockdev nvme passthru rw ...passed 00:15:27.544 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:38:22.543901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.544 [2024-07-15 15:38:22.543930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:27.544 [2024-07-15 15:38:22.544082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.544 [2024-07-15 15:38:22.544099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:27.544 [2024-07-15 15:38:22.544217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.544 [2024-07-15 15:38:22.544233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:27.544 [2024-07-15 15:38:22.544345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.544 [2024-07-15 15:38:22.544361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:27.544 passed 00:15:27.544 Test: blockdev nvme admin passthru ...passed 00:15:27.544 Test: blockdev copy ...passed 00:15:27.544 00:15:27.544 Run Summary: Type Total Ran Passed Failed Inactive 00:15:27.544 suites 1 1 n/a 0 0 00:15:27.544 tests 23 23 23 0 0 00:15:27.544 asserts 152 152 152 0 n/a 00:15:27.544 00:15:27.544 Elapsed time = 0.916 seconds 00:15:28.110 15:38:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.110 15:38:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.110 15:38:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:28.110 15:38:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.110 15:38:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:28.110 15:38:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:28.110 15:38:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:28.110 15:38:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:15:28.110 15:38:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.110 15:38:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:15:28.110 15:38:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.110 15:38:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.110 rmmod nvme_tcp 00:15:28.110 rmmod nvme_fabrics 00:15:28.110 rmmod nvme_keyring 00:15:28.110 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.110 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:15:28.110 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:15:28.110 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 82927 ']' 00:15:28.111 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 82927 00:15:28.111 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 82927 ']' 00:15:28.111 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 82927 00:15:28.111 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:15:28.111 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.111 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82927 00:15:28.111 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:28.111 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:28.111 killing process with pid 82927 00:15:28.111 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82927' 00:15:28.111 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 82927 00:15:28.111 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 82927 00:15:28.369 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:28.369 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:28.369 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:28.369 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.369 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.369 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.369 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.369 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.369 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:28.369 00:15:28.369 real 0m3.396s 00:15:28.369 user 0m12.092s 00:15:28.369 sys 0m1.163s 00:15:28.369 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.369 15:38:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:28.369 ************************************ 00:15:28.369 END TEST nvmf_bdevio_no_huge 00:15:28.369 ************************************ 00:15:28.369 15:38:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:28.369 15:38:23 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:28.369 15:38:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:28.369 15:38:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.369 15:38:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.369 ************************************ 00:15:28.369 START TEST nvmf_tls 00:15:28.369 ************************************ 00:15:28.369 15:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:28.627 * Looking for test storage... 
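(Annotation: the bdevio target torn down above was assembled with a short RPC sequence, and the TLS suite starting here repeats the same pattern with a PSK-enabled listener. Condensed from the rpc_cmd trace earlier, the bring-up amounts to the calls below; rpc.py talks to the target's default RPC socket /var/tmp/spdk.sock, the same socket waitforlisten polls above.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420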
00:15:28.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:28.627 15:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.627 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:28.627 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.627 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.627 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.627 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.627 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.627 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.627 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.627 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.627 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.627 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:28.628 Cannot find device "nvmf_tgt_br" 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:28.628 Cannot find device "nvmf_tgt_br2" 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:28.628 Cannot find device "nvmf_tgt_br" 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:28.628 Cannot find device "nvmf_tgt_br2" 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:28.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:28.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:28.628 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:15:28.886 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:28.886 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:28.886 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:28.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:15:28.887 00:15:28.887 --- 10.0.0.2 ping statistics --- 00:15:28.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.887 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:28.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:28.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:15:28.887 00:15:28.887 --- 10.0.0.3 ping statistics --- 00:15:28.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.887 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:28.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:28.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:28.887 00:15:28.887 --- 10.0.0.1 ping statistics --- 00:15:28.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.887 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83171 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83171 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83171 ']' 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.887 15:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.887 [2024-07-15 15:38:24.008606] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:15:28.887 [2024-07-15 15:38:24.008720] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.146 [2024-07-15 15:38:24.152291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.146 [2024-07-15 15:38:24.220615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.146 [2024-07-15 15:38:24.220687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:29.146 [2024-07-15 15:38:24.220710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.146 [2024-07-15 15:38:24.220720] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.146 [2024-07-15 15:38:24.220729] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.146 [2024-07-15 15:38:24.220764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.081 15:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.081 15:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:30.081 15:38:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.081 15:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.081 15:38:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.081 15:38:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.081 15:38:24 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:15:30.081 15:38:24 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:30.081 true 00:15:30.081 15:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:30.081 15:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:15:30.356 15:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:15:30.356 15:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:15:30.356 15:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:30.663 15:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:30.663 15:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:15:30.922 15:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:15:30.922 15:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:15:30.922 15:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:31.181 15:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:15:31.181 15:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:31.440 15:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:15:31.440 15:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:15:31.440 15:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:31.440 15:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:15:31.699 15:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:15:31.699 15:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:15:31.699 15:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:31.957 15:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:31.957 15:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
00:15:32.216 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:15:32.216 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:15:32.216 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:32.216 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:32.216 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:32.476 15:38:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:32.735 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:32.735 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:15:32.735 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.nOaS5CbqvU 00:15:32.735 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:32.735 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.jjcMGt5rvw 00:15:32.735 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:32.735 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:32.735 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.nOaS5CbqvU 00:15:32.735 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.jjcMGt5rvw 00:15:32.735 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:32.735 15:38:27 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:32.994 15:38:28 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.nOaS5CbqvU 
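(Annotation: the two interchange-format keys generated above follow the pattern NVMeTLSkey-1:<digest>:<base64 payload>:, with the digest argument 1 rendered as the 01 field. A hypothetical one-liner that reproduces the first key string is sketched below; the payload layout, base64 of the configured PSK bytes with a CRC-32 appended, and the CRC byte order are assumptions inferred from the 48-character payload, not read from the trace, so treat it as illustrative only.)

# illustrative sketch, mirroring the 'python -' helper seen in the trace above
python3 -c 'import base64, sys, zlib
k = sys.argv[1].encode()                       # the 32-character hex string used as-is
crc = zlib.crc32(k).to_bytes(4, "little")      # assumed: CRC-32 of the key bytes, little-endian
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(k + crc).decode())' 00112233445566778899aabbccddeeff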
00:15:32.994 15:38:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nOaS5CbqvU 00:15:32.994 15:38:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:33.252 [2024-07-15 15:38:28.255422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.252 15:38:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:33.511 15:38:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:33.511 [2024-07-15 15:38:28.639475] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:33.511 [2024-07-15 15:38:28.639760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.770 15:38:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:33.770 malloc0 00:15:33.770 15:38:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:34.029 15:38:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nOaS5CbqvU 00:15:34.288 [2024-07-15 15:38:29.289493] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:34.288 15:38:29 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.nOaS5CbqvU 00:15:46.493 Initializing NVMe Controllers 00:15:46.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:46.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:46.493 Initialization complete. Launching workers. 
00:15:46.493 ======================================================== 00:15:46.493 Latency(us) 00:15:46.493 Device Information : IOPS MiB/s Average min max 00:15:46.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11442.66 44.70 5594.45 803.26 8201.72 00:15:46.493 ======================================================== 00:15:46.493 Total : 11442.66 44.70 5594.45 803.26 8201.72 00:15:46.493 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nOaS5CbqvU 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nOaS5CbqvU' 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83521 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83521 /var/tmp/bdevperf.sock 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83521 ']' 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:46.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.493 [2024-07-15 15:38:39.515246] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:15:46.493 [2024-07-15 15:38:39.515344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83521 ] 00:15:46.493 [2024-07-15 15:38:39.650246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.493 [2024-07-15 15:38:39.718167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:46.493 15:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nOaS5CbqvU 00:15:46.493 [2024-07-15 15:38:39.982448] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:46.493 [2024-07-15 15:38:39.982765] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:46.493 TLSTESTn1 00:15:46.493 15:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:46.493 Running I/O for 10 seconds... 00:15:56.467 00:15:56.467 Latency(us) 00:15:56.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.467 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:56.467 Verification LBA range: start 0x0 length 0x2000 00:15:56.467 TLSTESTn1 : 10.02 4613.96 18.02 0.00 0.00 27689.15 547.37 18707.55 00:15:56.467 =================================================================================================================== 00:15:56.467 Total : 4613.96 18.02 0.00 0.00 27689.15 547.37 18707.55 00:15:56.467 0 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83521 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83521 ']' 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83521 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83521 00:15:56.467 killing process with pid 83521 00:15:56.467 Received shutdown signal, test time was about 10.000000 seconds 00:15:56.467 00:15:56.467 Latency(us) 00:15:56.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.467 =================================================================================================================== 00:15:56.467 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
83521' 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83521 00:15:56.467 [2024-07-15 15:38:50.219666] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83521 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jjcMGt5rvw 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jjcMGt5rvw 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jjcMGt5rvw 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jjcMGt5rvw' 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83650 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83650 /var/tmp/bdevperf.sock 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83650 ']' 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:56.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.467 15:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.467 [2024-07-15 15:38:50.426226] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:15:56.467 [2024-07-15 15:38:50.426323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83650 ] 00:15:56.467 [2024-07-15 15:38:50.562665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.467 [2024-07-15 15:38:50.617861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.467 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.467 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:56.467 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jjcMGt5rvw 00:15:56.726 [2024-07-15 15:38:51.639118] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:56.726 [2024-07-15 15:38:51.639241] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:56.726 [2024-07-15 15:38:51.648295] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:56.726 [2024-07-15 15:38:51.648824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1271ca0 (107): Transport endpoint is not connected 00:15:56.726 [2024-07-15 15:38:51.649813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1271ca0 (9): Bad file descriptor 00:15:56.726 [2024-07-15 15:38:51.650810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:56.726 [2024-07-15 15:38:51.650851] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:56.726 [2024-07-15 15:38:51.650865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:56.726 2024/07/15 15:38:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.jjcMGt5rvw subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:56.726 request: 00:15:56.726 { 00:15:56.726 "method": "bdev_nvme_attach_controller", 00:15:56.726 "params": { 00:15:56.726 "name": "TLSTEST", 00:15:56.726 "trtype": "tcp", 00:15:56.726 "traddr": "10.0.0.2", 00:15:56.726 "adrfam": "ipv4", 00:15:56.726 "trsvcid": "4420", 00:15:56.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:56.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:56.726 "prchk_reftag": false, 00:15:56.726 "prchk_guard": false, 00:15:56.726 "hdgst": false, 00:15:56.726 "ddgst": false, 00:15:56.726 "psk": "/tmp/tmp.jjcMGt5rvw" 00:15:56.726 } 00:15:56.726 } 00:15:56.726 Got JSON-RPC error response 00:15:56.726 GoRPCClient: error on JSON-RPC call 00:15:56.726 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83650 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83650 ']' 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83650 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83650 00:15:56.727 killing process with pid 83650 00:15:56.727 Received shutdown signal, test time was about 10.000000 seconds 00:15:56.727 00:15:56.727 Latency(us) 00:15:56.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.727 =================================================================================================================== 00:15:56.727 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83650' 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83650 00:15:56.727 [2024-07-15 15:38:51.695361] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83650 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nOaS5CbqvU 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nOaS5CbqvU 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nOaS5CbqvU 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nOaS5CbqvU' 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83696 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83696 /var/tmp/bdevperf.sock 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83696 ']' 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.727 15:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.986 [2024-07-15 15:38:51.893412] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:15:56.986 [2024-07-15 15:38:51.893517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83696 ] 00:15:56.986 [2024-07-15 15:38:52.031933] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.986 [2024-07-15 15:38:52.086272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.931 15:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.931 15:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:57.931 15:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.nOaS5CbqvU 00:15:58.205 [2024-07-15 15:38:53.094049] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:58.205 [2024-07-15 15:38:53.094167] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:58.205 [2024-07-15 15:38:53.099065] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:58.205 [2024-07-15 15:38:53.099134] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:58.205 [2024-07-15 15:38:53.099184] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:58.205 [2024-07-15 15:38:53.099780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59aca0 (107): Transport endpoint is not connected 00:15:58.205 [2024-07-15 15:38:53.100767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59aca0 (9): Bad file descriptor 00:15:58.205 [2024-07-15 15:38:53.101764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:58.205 [2024-07-15 15:38:53.101801] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:58.205 [2024-07-15 15:38:53.101813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:58.205 2024/07/15 15:38:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.nOaS5CbqvU subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:58.205 request: 00:15:58.205 { 00:15:58.205 "method": "bdev_nvme_attach_controller", 00:15:58.205 "params": { 00:15:58.205 "name": "TLSTEST", 00:15:58.205 "trtype": "tcp", 00:15:58.205 "traddr": "10.0.0.2", 00:15:58.205 "adrfam": "ipv4", 00:15:58.205 "trsvcid": "4420", 00:15:58.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.205 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:58.205 "prchk_reftag": false, 00:15:58.205 "prchk_guard": false, 00:15:58.205 "hdgst": false, 00:15:58.205 "ddgst": false, 00:15:58.205 "psk": "/tmp/tmp.nOaS5CbqvU" 00:15:58.205 } 00:15:58.205 } 00:15:58.205 Got JSON-RPC error response 00:15:58.205 GoRPCClient: error on JSON-RPC call 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83696 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83696 ']' 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83696 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83696 00:15:58.205 killing process with pid 83696 00:15:58.205 Received shutdown signal, test time was about 10.000000 seconds 00:15:58.205 00:15:58.205 Latency(us) 00:15:58.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.205 =================================================================================================================== 00:15:58.205 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83696' 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83696 00:15:58.205 [2024-07-15 15:38:53.149414] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83696 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nOaS5CbqvU 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nOaS5CbqvU 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.205 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nOaS5CbqvU 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nOaS5CbqvU' 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83740 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83740 /var/tmp/bdevperf.sock 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83740 ']' 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.206 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.464 [2024-07-15 15:38:53.341740] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:15:58.464 [2024-07-15 15:38:53.341845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83740 ] 00:15:58.464 [2024-07-15 15:38:53.475496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.464 [2024-07-15 15:38:53.527983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.722 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.722 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:58.722 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nOaS5CbqvU 00:15:58.722 [2024-07-15 15:38:53.799005] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:58.722 [2024-07-15 15:38:53.799160] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:58.722 [2024-07-15 15:38:53.808256] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:58.722 [2024-07-15 15:38:53.808309] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:58.722 [2024-07-15 15:38:53.808357] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:58.722 [2024-07-15 15:38:53.808829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1736ca0 (107): Transport endpoint is not connected 00:15:58.722 [2024-07-15 15:38:53.809818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1736ca0 (9): Bad file descriptor 00:15:58.722 [2024-07-15 15:38:53.810814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:58.722 [2024-07-15 15:38:53.810856] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:58.722 [2024-07-15 15:38:53.810871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:15:58.722 2024/07/15 15:38:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.nOaS5CbqvU subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:58.722 request: 00:15:58.722 { 00:15:58.722 "method": "bdev_nvme_attach_controller", 00:15:58.722 "params": { 00:15:58.722 "name": "TLSTEST", 00:15:58.722 "trtype": "tcp", 00:15:58.722 "traddr": "10.0.0.2", 00:15:58.723 "adrfam": "ipv4", 00:15:58.723 "trsvcid": "4420", 00:15:58.723 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:58.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:58.723 "prchk_reftag": false, 00:15:58.723 "prchk_guard": false, 00:15:58.723 "hdgst": false, 00:15:58.723 "ddgst": false, 00:15:58.723 "psk": "/tmp/tmp.nOaS5CbqvU" 00:15:58.723 } 00:15:58.723 } 00:15:58.723 Got JSON-RPC error response 00:15:58.723 GoRPCClient: error on JSON-RPC call 00:15:58.723 15:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83740 00:15:58.723 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83740 ']' 00:15:58.723 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83740 00:15:58.723 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:58.723 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:58.723 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83740 00:15:58.981 killing process with pid 83740 00:15:58.981 Received shutdown signal, test time was about 10.000000 seconds 00:15:58.981 00:15:58.981 Latency(us) 00:15:58.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.981 =================================================================================================================== 00:15:58.981 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:58.981 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:58.981 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:58.981 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83740' 00:15:58.981 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83740 00:15:58.981 [2024-07-15 15:38:53.860839] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:58.981 15:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83740 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83768 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83768 /var/tmp/bdevperf.sock 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83768 ']' 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.981 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.981 [2024-07-15 15:38:54.073735] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:15:58.981 [2024-07-15 15:38:54.073843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83768 ] 00:15:59.240 [2024-07-15 15:38:54.204851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.240 [2024-07-15 15:38:54.256677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.240 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:59.240 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:59.240 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:59.498 [2024-07-15 15:38:54.588451] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:59.498 [2024-07-15 15:38:54.589927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c4240 (9): Bad file descriptor 00:15:59.498 [2024-07-15 15:38:54.590909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:59.498 [2024-07-15 15:38:54.590937] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:59.498 [2024-07-15 15:38:54.590951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:59.498 2024/07/15 15:38:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:59.498 request: 00:15:59.498 { 00:15:59.498 "method": "bdev_nvme_attach_controller", 00:15:59.498 "params": { 00:15:59.498 "name": "TLSTEST", 00:15:59.498 "trtype": "tcp", 00:15:59.498 "traddr": "10.0.0.2", 00:15:59.498 "adrfam": "ipv4", 00:15:59.498 "trsvcid": "4420", 00:15:59.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:59.498 "prchk_reftag": false, 00:15:59.498 "prchk_guard": false, 00:15:59.498 "hdgst": false, 00:15:59.498 "ddgst": false 00:15:59.498 } 00:15:59.498 } 00:15:59.498 Got JSON-RPC error response 00:15:59.498 GoRPCClient: error on JSON-RPC call 00:15:59.498 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83768 00:15:59.498 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83768 ']' 00:15:59.498 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83768 00:15:59.498 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:59.498 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.498 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83768 00:15:59.757 killing process with pid 83768 00:15:59.757 Received shutdown signal, test time was about 10.000000 seconds 00:15:59.757 00:15:59.757 Latency(us) 00:15:59.757 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.757 =================================================================================================================== 00:15:59.757 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83768' 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83768 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83768 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83171 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83171 ']' 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83171 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83171 00:15:59.757 killing process with pid 83171 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83171' 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83171 00:15:59.757 [2024-07-15 15:38:54.794929] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:59.757 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83171 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.L0ReyH6Ojp 
00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.L0ReyH6Ojp 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.017 15:38:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83810 00:16:00.017 15:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83810 00:16:00.017 15:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:00.017 15:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83810 ']' 00:16:00.017 15:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.017 15:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.017 15:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.017 15:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.017 15:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.017 [2024-07-15 15:38:55.070543] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:00.017 [2024-07-15 15:38:55.070863] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.276 [2024-07-15 15:38:55.211543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.276 [2024-07-15 15:38:55.263325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.276 [2024-07-15 15:38:55.263373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.276 [2024-07-15 15:38:55.263383] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.276 [2024-07-15 15:38:55.263390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.276 [2024-07-15 15:38:55.263396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:00.276 [2024-07-15 15:38:55.263424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.276 15:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.276 15:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:00.276 15:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:00.276 15:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:00.276 15:38:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.276 15:38:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.276 15:38:55 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.L0ReyH6Ojp 00:16:00.276 15:38:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.L0ReyH6Ojp 00:16:00.276 15:38:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:00.535 [2024-07-15 15:38:55.634707] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.535 15:38:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:00.794 15:38:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:01.052 [2024-07-15 15:38:56.118780] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:01.052 [2024-07-15 15:38:56.118981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.052 15:38:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:01.310 malloc0 00:16:01.310 15:38:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:01.569 15:38:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L0ReyH6Ojp 00:16:01.826 [2024-07-15 15:38:56.773246] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L0ReyH6Ojp 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.L0ReyH6Ojp' 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83898 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:01.826 15:38:56 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83898 /var/tmp/bdevperf.sock 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83898 ']' 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.826 15:38:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:01.826 [2024-07-15 15:38:56.834833] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:01.826 [2024-07-15 15:38:56.834921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83898 ] 00:16:02.133 [2024-07-15 15:38:56.970399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.133 [2024-07-15 15:38:57.039753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.699 15:38:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.699 15:38:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:02.699 15:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L0ReyH6Ojp 00:16:02.958 [2024-07-15 15:38:57.995415] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:02.958 [2024-07-15 15:38:57.995508] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:02.958 TLSTESTn1 00:16:03.216 15:38:58 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:03.216 Running I/O for 10 seconds... 
00:16:13.212 00:16:13.212 Latency(us) 00:16:13.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.212 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:13.212 Verification LBA range: start 0x0 length 0x2000 00:16:13.212 TLSTESTn1 : 10.02 4423.73 17.28 0.00 0.00 28875.96 6553.60 27525.12 00:16:13.212 =================================================================================================================== 00:16:13.212 Total : 4423.73 17.28 0.00 0.00 28875.96 6553.60 27525.12 00:16:13.212 0 00:16:13.212 15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:13.212 15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83898 00:16:13.212 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83898 ']' 00:16:13.212 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83898 00:16:13.212 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:13.212 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:13.212 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83898 00:16:13.212 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:13.212 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:13.212 killing process with pid 83898 00:16:13.212 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83898' 00:16:13.212 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83898 00:16:13.212 Received shutdown signal, test time was about 10.000000 seconds 00:16:13.212 00:16:13.212 Latency(us) 00:16:13.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.212 =================================================================================================================== 00:16:13.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:13.212 [2024-07-15 15:39:08.272134] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:13.212 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83898 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.L0ReyH6Ojp 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L0ReyH6Ojp 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L0ReyH6Ojp 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L0ReyH6Ojp 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:13.470 
15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.L0ReyH6Ojp' 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84046 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84046 /var/tmp/bdevperf.sock 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84046 ']' 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.470 15:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.470 [2024-07-15 15:39:08.479497] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:13.470 [2024-07-15 15:39:08.479606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84046 ] 00:16:13.729 [2024-07-15 15:39:08.616046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.729 [2024-07-15 15:39:08.669191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.296 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.296 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:14.296 15:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L0ReyH6Ojp 00:16:14.554 [2024-07-15 15:39:09.621821] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:14.554 [2024-07-15 15:39:09.621947] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:14.554 [2024-07-15 15:39:09.621957] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.L0ReyH6Ojp 00:16:14.555 2024/07/15 15:39:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.L0ReyH6Ojp subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:16:14.555 request: 00:16:14.555 { 00:16:14.555 "method": "bdev_nvme_attach_controller", 00:16:14.555 "params": { 00:16:14.555 "name": "TLSTEST", 00:16:14.555 "trtype": "tcp", 00:16:14.555 "traddr": "10.0.0.2", 00:16:14.555 "adrfam": "ipv4", 00:16:14.555 "trsvcid": "4420", 00:16:14.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:14.555 "prchk_reftag": false, 00:16:14.555 "prchk_guard": false, 00:16:14.555 "hdgst": false, 00:16:14.555 "ddgst": false, 00:16:14.555 "psk": "/tmp/tmp.L0ReyH6Ojp" 00:16:14.555 } 00:16:14.555 } 00:16:14.555 Got JSON-RPC error response 00:16:14.555 GoRPCClient: error on JSON-RPC call 00:16:14.555 15:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84046 00:16:14.555 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84046 ']' 00:16:14.555 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84046 00:16:14.555 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:14.555 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.555 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84046 00:16:14.555 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:14.555 killing process with pid 84046 00:16:14.555 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:14.555 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84046' 00:16:14.555 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84046 00:16:14.555 Received shutdown signal, test time was about 10.000000 seconds 00:16:14.555 00:16:14.555 Latency(us) 00:16:14.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.555 =================================================================================================================== 00:16:14.555 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:14.555 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84046 00:16:14.813 15:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:14.813 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 83810 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83810 ']' 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83810 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83810 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:14.814 killing process with pid 83810 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 83810' 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83810 00:16:14.814 [2024-07-15 15:39:09.831288] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:14.814 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83810 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84097 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84097 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84097 ']' 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.073 15:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.073 [2024-07-15 15:39:10.049239] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:15.073 [2024-07-15 15:39:10.049348] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.073 [2024-07-15 15:39:10.182475] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.332 [2024-07-15 15:39:10.235222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.332 [2024-07-15 15:39:10.235286] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.332 [2024-07-15 15:39:10.235311] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.332 [2024-07-15 15:39:10.235318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.332 [2024-07-15 15:39:10.235324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:15.332 [2024-07-15 15:39:10.235350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.898 15:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.898 15:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:15.898 15:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.898 15:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:15.898 15:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.898 15:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.898 15:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.L0ReyH6Ojp 00:16:15.898 15:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:15.898 15:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.L0ReyH6Ojp 00:16:15.898 15:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:16:15.898 15:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:15.898 15:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:16:15.898 15:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:15.898 15:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.L0ReyH6Ojp 00:16:15.898 15:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.L0ReyH6Ojp 00:16:15.898 15:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:16.157 [2024-07-15 15:39:11.256839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.157 15:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:16.414 15:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:16.673 [2024-07-15 15:39:11.748987] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:16.673 [2024-07-15 15:39:11.749164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.673 15:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:16.932 malloc0 00:16:16.932 15:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:17.190 15:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L0ReyH6Ojp 00:16:17.448 [2024-07-15 15:39:12.431327] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:17.448 [2024-07-15 15:39:12.431358] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:16:17.448 [2024-07-15 15:39:12.431386] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:17.448 2024/07/15 15:39:12 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.L0ReyH6Ojp], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:17.448 request: 00:16:17.448 { 00:16:17.448 "method": "nvmf_subsystem_add_host", 00:16:17.448 "params": { 00:16:17.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.448 "host": "nqn.2016-06.io.spdk:host1", 00:16:17.448 "psk": "/tmp/tmp.L0ReyH6Ojp" 00:16:17.448 } 00:16:17.448 } 00:16:17.448 Got JSON-RPC error response 00:16:17.448 GoRPCClient: error on JSON-RPC call 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84097 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84097 ']' 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84097 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84097 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:17.448 killing process with pid 84097 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84097' 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84097 00:16:17.448 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84097 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.L0ReyH6Ojp 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84202 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84202 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84202 ']' 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
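The failure above is exactly what the check at target/tls.sh@177 expects: nvmf_subsystem_add_host rejects a PSK file whose permissions are too open ("Incorrect permissions for PSK file"), and the test then fixes the mode with chmod 0600 before setting the target up again. A condensed sketch of the same RPC sequence, with every command and path taken from this log (the real script also restarts the target between the failing and the succeeding attempt):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.L0ReyH6Ojp

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Rejected while the key file is readable by group/others:
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key" || true

chmod 0600 "$key"
# Accepted afterwards (with the "PSK path" deprecation warning seen later in the log):
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"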
00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.706 15:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:17.706 [2024-07-15 15:39:12.698929] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:17.706 [2024-07-15 15:39:12.699022] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.706 [2024-07-15 15:39:12.831267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.964 [2024-07-15 15:39:12.885886] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.964 [2024-07-15 15:39:12.885949] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.964 [2024-07-15 15:39:12.885959] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.964 [2024-07-15 15:39:12.885966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.964 [2024-07-15 15:39:12.885973] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.964 [2024-07-15 15:39:12.886000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.527 15:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.527 15:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:18.527 15:39:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.527 15:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:18.527 15:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.785 15:39:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.785 15:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.L0ReyH6Ojp 00:16:18.785 15:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.L0ReyH6Ojp 00:16:18.785 15:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:18.785 [2024-07-15 15:39:13.902085] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.043 15:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:19.302 15:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:19.302 [2024-07-15 15:39:14.366163] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:19.302 [2024-07-15 15:39:14.366353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.302 15:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:19.560 malloc0 00:16:19.560 15:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:19.817 15:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L0ReyH6Ojp 00:16:20.075 [2024-07-15 15:39:15.048520] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:20.075 15:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:20.075 15:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84305 00:16:20.075 15:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:20.075 15:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84305 /var/tmp/bdevperf.sock 00:16:20.075 15:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84305 ']' 00:16:20.075 15:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:20.075 15:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:20.076 15:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:20.076 15:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.076 15:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:20.076 [2024-07-15 15:39:15.110940] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:20.076 [2024-07-15 15:39:15.111010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84305 ] 00:16:20.333 [2024-07-15 15:39:15.246637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.334 [2024-07-15 15:39:15.315700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.270 15:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.270 15:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:21.270 15:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L0ReyH6Ojp 00:16:21.270 [2024-07-15 15:39:16.275147] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:21.270 [2024-07-15 15:39:16.275240] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:21.270 TLSTESTn1 00:16:21.270 15:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:21.837 15:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:16:21.837 "subsystems": [ 00:16:21.837 { 00:16:21.837 "subsystem": "keyring", 00:16:21.837 "config": [] 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "subsystem": "iobuf", 00:16:21.837 "config": [ 00:16:21.837 { 00:16:21.837 "method": "iobuf_set_options", 00:16:21.837 "params": { 00:16:21.837 "large_bufsize": 
135168, 00:16:21.837 "large_pool_count": 1024, 00:16:21.837 "small_bufsize": 8192, 00:16:21.837 "small_pool_count": 8192 00:16:21.837 } 00:16:21.837 } 00:16:21.837 ] 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "subsystem": "sock", 00:16:21.837 "config": [ 00:16:21.837 { 00:16:21.837 "method": "sock_set_default_impl", 00:16:21.837 "params": { 00:16:21.837 "impl_name": "posix" 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "sock_impl_set_options", 00:16:21.837 "params": { 00:16:21.837 "enable_ktls": false, 00:16:21.837 "enable_placement_id": 0, 00:16:21.837 "enable_quickack": false, 00:16:21.837 "enable_recv_pipe": true, 00:16:21.837 "enable_zerocopy_send_client": false, 00:16:21.837 "enable_zerocopy_send_server": true, 00:16:21.837 "impl_name": "ssl", 00:16:21.837 "recv_buf_size": 4096, 00:16:21.837 "send_buf_size": 4096, 00:16:21.837 "tls_version": 0, 00:16:21.837 "zerocopy_threshold": 0 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "sock_impl_set_options", 00:16:21.837 "params": { 00:16:21.837 "enable_ktls": false, 00:16:21.837 "enable_placement_id": 0, 00:16:21.837 "enable_quickack": false, 00:16:21.837 "enable_recv_pipe": true, 00:16:21.837 "enable_zerocopy_send_client": false, 00:16:21.837 "enable_zerocopy_send_server": true, 00:16:21.837 "impl_name": "posix", 00:16:21.837 "recv_buf_size": 2097152, 00:16:21.837 "send_buf_size": 2097152, 00:16:21.837 "tls_version": 0, 00:16:21.837 "zerocopy_threshold": 0 00:16:21.837 } 00:16:21.837 } 00:16:21.837 ] 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "subsystem": "vmd", 00:16:21.837 "config": [] 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "subsystem": "accel", 00:16:21.837 "config": [ 00:16:21.837 { 00:16:21.837 "method": "accel_set_options", 00:16:21.837 "params": { 00:16:21.837 "buf_count": 2048, 00:16:21.837 "large_cache_size": 16, 00:16:21.837 "sequence_count": 2048, 00:16:21.837 "small_cache_size": 128, 00:16:21.837 "task_count": 2048 00:16:21.837 } 00:16:21.837 } 00:16:21.837 ] 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "subsystem": "bdev", 00:16:21.837 "config": [ 00:16:21.837 { 00:16:21.837 "method": "bdev_set_options", 00:16:21.837 "params": { 00:16:21.837 "bdev_auto_examine": true, 00:16:21.837 "bdev_io_cache_size": 256, 00:16:21.837 "bdev_io_pool_size": 65535, 00:16:21.837 "iobuf_large_cache_size": 16, 00:16:21.837 "iobuf_small_cache_size": 128 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "bdev_raid_set_options", 00:16:21.837 "params": { 00:16:21.837 "process_window_size_kb": 1024 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "bdev_iscsi_set_options", 00:16:21.837 "params": { 00:16:21.837 "timeout_sec": 30 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "bdev_nvme_set_options", 00:16:21.837 "params": { 00:16:21.837 "action_on_timeout": "none", 00:16:21.837 "allow_accel_sequence": false, 00:16:21.837 "arbitration_burst": 0, 00:16:21.837 "bdev_retry_count": 3, 00:16:21.837 "ctrlr_loss_timeout_sec": 0, 00:16:21.837 "delay_cmd_submit": true, 00:16:21.837 "dhchap_dhgroups": [ 00:16:21.837 "null", 00:16:21.837 "ffdhe2048", 00:16:21.837 "ffdhe3072", 00:16:21.837 "ffdhe4096", 00:16:21.837 "ffdhe6144", 00:16:21.837 "ffdhe8192" 00:16:21.837 ], 00:16:21.837 "dhchap_digests": [ 00:16:21.837 "sha256", 00:16:21.837 "sha384", 00:16:21.837 "sha512" 00:16:21.837 ], 00:16:21.837 "disable_auto_failback": false, 00:16:21.837 "fast_io_fail_timeout_sec": 0, 00:16:21.837 "generate_uuids": false, 00:16:21.837 "high_priority_weight": 0, 
00:16:21.837 "io_path_stat": false, 00:16:21.837 "io_queue_requests": 0, 00:16:21.837 "keep_alive_timeout_ms": 10000, 00:16:21.837 "low_priority_weight": 0, 00:16:21.837 "medium_priority_weight": 0, 00:16:21.837 "nvme_adminq_poll_period_us": 10000, 00:16:21.837 "nvme_error_stat": false, 00:16:21.837 "nvme_ioq_poll_period_us": 0, 00:16:21.837 "rdma_cm_event_timeout_ms": 0, 00:16:21.837 "rdma_max_cq_size": 0, 00:16:21.837 "rdma_srq_size": 0, 00:16:21.837 "reconnect_delay_sec": 0, 00:16:21.837 "timeout_admin_us": 0, 00:16:21.837 "timeout_us": 0, 00:16:21.837 "transport_ack_timeout": 0, 00:16:21.837 "transport_retry_count": 4, 00:16:21.837 "transport_tos": 0 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "bdev_nvme_set_hotplug", 00:16:21.837 "params": { 00:16:21.837 "enable": false, 00:16:21.837 "period_us": 100000 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "bdev_malloc_create", 00:16:21.837 "params": { 00:16:21.837 "block_size": 4096, 00:16:21.837 "name": "malloc0", 00:16:21.837 "num_blocks": 8192, 00:16:21.837 "optimal_io_boundary": 0, 00:16:21.837 "physical_block_size": 4096, 00:16:21.837 "uuid": "c66f52a1-43cb-47ae-b3fa-9f191284c11d" 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "bdev_wait_for_examine" 00:16:21.837 } 00:16:21.837 ] 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "subsystem": "nbd", 00:16:21.837 "config": [] 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "subsystem": "scheduler", 00:16:21.837 "config": [ 00:16:21.837 { 00:16:21.837 "method": "framework_set_scheduler", 00:16:21.837 "params": { 00:16:21.837 "name": "static" 00:16:21.837 } 00:16:21.837 } 00:16:21.837 ] 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "subsystem": "nvmf", 00:16:21.837 "config": [ 00:16:21.837 { 00:16:21.837 "method": "nvmf_set_config", 00:16:21.837 "params": { 00:16:21.837 "admin_cmd_passthru": { 00:16:21.837 "identify_ctrlr": false 00:16:21.837 }, 00:16:21.837 "discovery_filter": "match_any" 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "nvmf_set_max_subsystems", 00:16:21.837 "params": { 00:16:21.837 "max_subsystems": 1024 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "nvmf_set_crdt", 00:16:21.837 "params": { 00:16:21.837 "crdt1": 0, 00:16:21.837 "crdt2": 0, 00:16:21.837 "crdt3": 0 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "nvmf_create_transport", 00:16:21.837 "params": { 00:16:21.837 "abort_timeout_sec": 1, 00:16:21.837 "ack_timeout": 0, 00:16:21.837 "buf_cache_size": 4294967295, 00:16:21.837 "c2h_success": false, 00:16:21.837 "data_wr_pool_size": 0, 00:16:21.837 "dif_insert_or_strip": false, 00:16:21.837 "in_capsule_data_size": 4096, 00:16:21.837 "io_unit_size": 131072, 00:16:21.837 "max_aq_depth": 128, 00:16:21.837 "max_io_qpairs_per_ctrlr": 127, 00:16:21.837 "max_io_size": 131072, 00:16:21.837 "max_queue_depth": 128, 00:16:21.837 "num_shared_buffers": 511, 00:16:21.837 "sock_priority": 0, 00:16:21.837 "trtype": "TCP", 00:16:21.837 "zcopy": false 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "nvmf_create_subsystem", 00:16:21.837 "params": { 00:16:21.837 "allow_any_host": false, 00:16:21.837 "ana_reporting": false, 00:16:21.837 "max_cntlid": 65519, 00:16:21.837 "max_namespaces": 10, 00:16:21.837 "min_cntlid": 1, 00:16:21.837 "model_number": "SPDK bdev Controller", 00:16:21.837 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.837 "serial_number": "SPDK00000000000001" 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": 
"nvmf_subsystem_add_host", 00:16:21.837 "params": { 00:16:21.837 "host": "nqn.2016-06.io.spdk:host1", 00:16:21.837 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.837 "psk": "/tmp/tmp.L0ReyH6Ojp" 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "nvmf_subsystem_add_ns", 00:16:21.837 "params": { 00:16:21.837 "namespace": { 00:16:21.837 "bdev_name": "malloc0", 00:16:21.837 "nguid": "C66F52A143CB47AEB3FA9F191284C11D", 00:16:21.837 "no_auto_visible": false, 00:16:21.837 "nsid": 1, 00:16:21.837 "uuid": "c66f52a1-43cb-47ae-b3fa-9f191284c11d" 00:16:21.837 }, 00:16:21.837 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:21.837 } 00:16:21.837 }, 00:16:21.837 { 00:16:21.837 "method": "nvmf_subsystem_add_listener", 00:16:21.837 "params": { 00:16:21.837 "listen_address": { 00:16:21.837 "adrfam": "IPv4", 00:16:21.837 "traddr": "10.0.0.2", 00:16:21.837 "trsvcid": "4420", 00:16:21.837 "trtype": "TCP" 00:16:21.837 }, 00:16:21.837 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.837 "secure_channel": true 00:16:21.837 } 00:16:21.837 } 00:16:21.837 ] 00:16:21.837 } 00:16:21.837 ] 00:16:21.837 }' 00:16:21.837 15:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:22.097 15:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:16:22.097 "subsystems": [ 00:16:22.097 { 00:16:22.097 "subsystem": "keyring", 00:16:22.097 "config": [] 00:16:22.097 }, 00:16:22.097 { 00:16:22.097 "subsystem": "iobuf", 00:16:22.097 "config": [ 00:16:22.097 { 00:16:22.097 "method": "iobuf_set_options", 00:16:22.097 "params": { 00:16:22.097 "large_bufsize": 135168, 00:16:22.097 "large_pool_count": 1024, 00:16:22.097 "small_bufsize": 8192, 00:16:22.097 "small_pool_count": 8192 00:16:22.097 } 00:16:22.097 } 00:16:22.097 ] 00:16:22.097 }, 00:16:22.097 { 00:16:22.097 "subsystem": "sock", 00:16:22.097 "config": [ 00:16:22.097 { 00:16:22.097 "method": "sock_set_default_impl", 00:16:22.097 "params": { 00:16:22.097 "impl_name": "posix" 00:16:22.097 } 00:16:22.097 }, 00:16:22.097 { 00:16:22.097 "method": "sock_impl_set_options", 00:16:22.097 "params": { 00:16:22.097 "enable_ktls": false, 00:16:22.097 "enable_placement_id": 0, 00:16:22.097 "enable_quickack": false, 00:16:22.097 "enable_recv_pipe": true, 00:16:22.097 "enable_zerocopy_send_client": false, 00:16:22.097 "enable_zerocopy_send_server": true, 00:16:22.097 "impl_name": "ssl", 00:16:22.097 "recv_buf_size": 4096, 00:16:22.097 "send_buf_size": 4096, 00:16:22.097 "tls_version": 0, 00:16:22.097 "zerocopy_threshold": 0 00:16:22.097 } 00:16:22.097 }, 00:16:22.097 { 00:16:22.097 "method": "sock_impl_set_options", 00:16:22.097 "params": { 00:16:22.097 "enable_ktls": false, 00:16:22.097 "enable_placement_id": 0, 00:16:22.097 "enable_quickack": false, 00:16:22.097 "enable_recv_pipe": true, 00:16:22.097 "enable_zerocopy_send_client": false, 00:16:22.097 "enable_zerocopy_send_server": true, 00:16:22.097 "impl_name": "posix", 00:16:22.097 "recv_buf_size": 2097152, 00:16:22.097 "send_buf_size": 2097152, 00:16:22.097 "tls_version": 0, 00:16:22.097 "zerocopy_threshold": 0 00:16:22.097 } 00:16:22.097 } 00:16:22.097 ] 00:16:22.097 }, 00:16:22.097 { 00:16:22.097 "subsystem": "vmd", 00:16:22.097 "config": [] 00:16:22.097 }, 00:16:22.097 { 00:16:22.097 "subsystem": "accel", 00:16:22.097 "config": [ 00:16:22.097 { 00:16:22.097 "method": "accel_set_options", 00:16:22.097 "params": { 00:16:22.097 "buf_count": 2048, 00:16:22.097 "large_cache_size": 16, 00:16:22.097 "sequence_count": 2048, 00:16:22.097 
"small_cache_size": 128, 00:16:22.097 "task_count": 2048 00:16:22.097 } 00:16:22.097 } 00:16:22.097 ] 00:16:22.097 }, 00:16:22.097 { 00:16:22.098 "subsystem": "bdev", 00:16:22.098 "config": [ 00:16:22.098 { 00:16:22.098 "method": "bdev_set_options", 00:16:22.098 "params": { 00:16:22.098 "bdev_auto_examine": true, 00:16:22.098 "bdev_io_cache_size": 256, 00:16:22.098 "bdev_io_pool_size": 65535, 00:16:22.098 "iobuf_large_cache_size": 16, 00:16:22.098 "iobuf_small_cache_size": 128 00:16:22.098 } 00:16:22.098 }, 00:16:22.098 { 00:16:22.098 "method": "bdev_raid_set_options", 00:16:22.098 "params": { 00:16:22.098 "process_window_size_kb": 1024 00:16:22.098 } 00:16:22.098 }, 00:16:22.098 { 00:16:22.098 "method": "bdev_iscsi_set_options", 00:16:22.098 "params": { 00:16:22.098 "timeout_sec": 30 00:16:22.098 } 00:16:22.098 }, 00:16:22.098 { 00:16:22.098 "method": "bdev_nvme_set_options", 00:16:22.098 "params": { 00:16:22.098 "action_on_timeout": "none", 00:16:22.098 "allow_accel_sequence": false, 00:16:22.098 "arbitration_burst": 0, 00:16:22.098 "bdev_retry_count": 3, 00:16:22.098 "ctrlr_loss_timeout_sec": 0, 00:16:22.098 "delay_cmd_submit": true, 00:16:22.098 "dhchap_dhgroups": [ 00:16:22.098 "null", 00:16:22.098 "ffdhe2048", 00:16:22.098 "ffdhe3072", 00:16:22.098 "ffdhe4096", 00:16:22.098 "ffdhe6144", 00:16:22.098 "ffdhe8192" 00:16:22.098 ], 00:16:22.098 "dhchap_digests": [ 00:16:22.098 "sha256", 00:16:22.098 "sha384", 00:16:22.098 "sha512" 00:16:22.098 ], 00:16:22.098 "disable_auto_failback": false, 00:16:22.098 "fast_io_fail_timeout_sec": 0, 00:16:22.098 "generate_uuids": false, 00:16:22.098 "high_priority_weight": 0, 00:16:22.098 "io_path_stat": false, 00:16:22.098 "io_queue_requests": 512, 00:16:22.098 "keep_alive_timeout_ms": 10000, 00:16:22.098 "low_priority_weight": 0, 00:16:22.098 "medium_priority_weight": 0, 00:16:22.098 "nvme_adminq_poll_period_us": 10000, 00:16:22.098 "nvme_error_stat": false, 00:16:22.098 "nvme_ioq_poll_period_us": 0, 00:16:22.098 "rdma_cm_event_timeout_ms": 0, 00:16:22.098 "rdma_max_cq_size": 0, 00:16:22.098 "rdma_srq_size": 0, 00:16:22.098 "reconnect_delay_sec": 0, 00:16:22.098 "timeout_admin_us": 0, 00:16:22.098 "timeout_us": 0, 00:16:22.098 "transport_ack_timeout": 0, 00:16:22.098 "transport_retry_count": 4, 00:16:22.098 "transport_tos": 0 00:16:22.098 } 00:16:22.098 }, 00:16:22.098 { 00:16:22.098 "method": "bdev_nvme_attach_controller", 00:16:22.098 "params": { 00:16:22.098 "adrfam": "IPv4", 00:16:22.098 "ctrlr_loss_timeout_sec": 0, 00:16:22.098 "ddgst": false, 00:16:22.098 "fast_io_fail_timeout_sec": 0, 00:16:22.098 "hdgst": false, 00:16:22.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:22.098 "name": "TLSTEST", 00:16:22.098 "prchk_guard": false, 00:16:22.098 "prchk_reftag": false, 00:16:22.098 "psk": "/tmp/tmp.L0ReyH6Ojp", 00:16:22.098 "reconnect_delay_sec": 0, 00:16:22.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.098 "traddr": "10.0.0.2", 00:16:22.098 "trsvcid": "4420", 00:16:22.098 "trtype": "TCP" 00:16:22.098 } 00:16:22.098 }, 00:16:22.098 { 00:16:22.098 "method": "bdev_nvme_set_hotplug", 00:16:22.098 "params": { 00:16:22.098 "enable": false, 00:16:22.098 "period_us": 100000 00:16:22.098 } 00:16:22.098 }, 00:16:22.098 { 00:16:22.098 "method": "bdev_wait_for_examine" 00:16:22.098 } 00:16:22.098 ] 00:16:22.098 }, 00:16:22.098 { 00:16:22.098 "subsystem": "nbd", 00:16:22.098 "config": [] 00:16:22.098 } 00:16:22.098 ] 00:16:22.098 }' 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84305 00:16:22.098 15:39:17 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84305 ']' 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84305 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84305 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:22.098 killing process with pid 84305 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84305' 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84305 00:16:22.098 Received shutdown signal, test time was about 10.000000 seconds 00:16:22.098 00:16:22.098 Latency(us) 00:16:22.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.098 =================================================================================================================== 00:16:22.098 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:22.098 [2024-07-15 15:39:17.046063] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84305 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84202 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84202 ']' 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84202 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84202 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:22.098 killing process with pid 84202 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84202' 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84202 00:16:22.098 [2024-07-15 15:39:17.213503] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:22.098 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84202 00:16:22.358 15:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:22.358 15:39:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:22.358 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:22.358 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.358 15:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:16:22.358 "subsystems": [ 00:16:22.358 { 00:16:22.358 "subsystem": "keyring", 00:16:22.358 "config": [] 00:16:22.358 }, 00:16:22.358 { 00:16:22.358 "subsystem": "iobuf", 00:16:22.358 "config": [ 00:16:22.358 { 00:16:22.358 "method": 
"iobuf_set_options", 00:16:22.358 "params": { 00:16:22.358 "large_bufsize": 135168, 00:16:22.358 "large_pool_count": 1024, 00:16:22.358 "small_bufsize": 8192, 00:16:22.358 "small_pool_count": 8192 00:16:22.358 } 00:16:22.358 } 00:16:22.358 ] 00:16:22.358 }, 00:16:22.358 { 00:16:22.358 "subsystem": "sock", 00:16:22.358 "config": [ 00:16:22.358 { 00:16:22.358 "method": "sock_set_default_impl", 00:16:22.358 "params": { 00:16:22.359 "impl_name": "posix" 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "sock_impl_set_options", 00:16:22.359 "params": { 00:16:22.359 "enable_ktls": false, 00:16:22.359 "enable_placement_id": 0, 00:16:22.359 "enable_quickack": false, 00:16:22.359 "enable_recv_pipe": true, 00:16:22.359 "enable_zerocopy_send_client": false, 00:16:22.359 "enable_zerocopy_send_server": true, 00:16:22.359 "impl_name": "ssl", 00:16:22.359 "recv_buf_size": 4096, 00:16:22.359 "send_buf_size": 4096, 00:16:22.359 "tls_version": 0, 00:16:22.359 "zerocopy_threshold": 0 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "sock_impl_set_options", 00:16:22.359 "params": { 00:16:22.359 "enable_ktls": false, 00:16:22.359 "enable_placement_id": 0, 00:16:22.359 "enable_quickack": false, 00:16:22.359 "enable_recv_pipe": true, 00:16:22.359 "enable_zerocopy_send_client": false, 00:16:22.359 "enable_zerocopy_send_server": true, 00:16:22.359 "impl_name": "posix", 00:16:22.359 "recv_buf_size": 2097152, 00:16:22.359 "send_buf_size": 2097152, 00:16:22.359 "tls_version": 0, 00:16:22.359 "zerocopy_threshold": 0 00:16:22.359 } 00:16:22.359 } 00:16:22.359 ] 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "subsystem": "vmd", 00:16:22.359 "config": [] 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "subsystem": "accel", 00:16:22.359 "config": [ 00:16:22.359 { 00:16:22.359 "method": "accel_set_options", 00:16:22.359 "params": { 00:16:22.359 "buf_count": 2048, 00:16:22.359 "large_cache_size": 16, 00:16:22.359 "sequence_count": 2048, 00:16:22.359 "small_cache_size": 128, 00:16:22.359 "task_count": 2048 00:16:22.359 } 00:16:22.359 } 00:16:22.359 ] 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "subsystem": "bdev", 00:16:22.359 "config": [ 00:16:22.359 { 00:16:22.359 "method": "bdev_set_options", 00:16:22.359 "params": { 00:16:22.359 "bdev_auto_examine": true, 00:16:22.359 "bdev_io_cache_size": 256, 00:16:22.359 "bdev_io_pool_size": 65535, 00:16:22.359 "iobuf_large_cache_size": 16, 00:16:22.359 "iobuf_small_cache_size": 128 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "bdev_raid_set_options", 00:16:22.359 "params": { 00:16:22.359 "process_window_size_kb": 1024 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "bdev_iscsi_set_options", 00:16:22.359 "params": { 00:16:22.359 "timeout_sec": 30 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "bdev_nvme_set_options", 00:16:22.359 "params": { 00:16:22.359 "action_on_timeout": "none", 00:16:22.359 "allow_accel_sequence": false, 00:16:22.359 "arbitration_burst": 0, 00:16:22.359 "bdev_retry_count": 3, 00:16:22.359 "ctrlr_loss_timeout_sec": 0, 00:16:22.359 "delay_cmd_submit": true, 00:16:22.359 "dhchap_dhgroups": [ 00:16:22.359 "null", 00:16:22.359 "ffdhe2048", 00:16:22.359 "ffdhe3072", 00:16:22.359 "ffdhe4096", 00:16:22.359 "ffdhe6144", 00:16:22.359 "ffdhe8192" 00:16:22.359 ], 00:16:22.359 "dhchap_digests": [ 00:16:22.359 "sha256", 00:16:22.359 "sha384", 00:16:22.359 "sha512" 00:16:22.359 ], 00:16:22.359 "disable_auto_failback": false, 00:16:22.359 "fast_io_fail_timeout_sec": 0, 00:16:22.359 
"generate_uuids": false, 00:16:22.359 "high_priority_weight": 0, 00:16:22.359 "io_path_stat": false, 00:16:22.359 "io_queue_requests": 0, 00:16:22.359 "keep_alive_timeout_ms": 10000, 00:16:22.359 "low_priority_weight": 0, 00:16:22.359 "medium_priority_weight": 0, 00:16:22.359 "nvme_adminq_poll_period_us": 10000, 00:16:22.359 "nvme_error_stat": false, 00:16:22.359 "nvme_ioq_poll_period_us": 0, 00:16:22.359 "rdma_cm_event_timeout_ms": 0, 00:16:22.359 "rdma_max_cq_size": 0, 00:16:22.359 "rdma_srq_size": 0, 00:16:22.359 "reconnect_delay_sec": 0, 00:16:22.359 "timeout_admin_us": 0, 00:16:22.359 "timeout_us": 0, 00:16:22.359 "transport_ack_timeout": 0, 00:16:22.359 "transport_retry_count": 4, 00:16:22.359 "transport_tos": 0 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "bdev_nvme_set_hotplug", 00:16:22.359 "params": { 00:16:22.359 "enable": false, 00:16:22.359 "period_us": 100000 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "bdev_malloc_create", 00:16:22.359 "params": { 00:16:22.359 "block_size": 4096, 00:16:22.359 "name": "malloc0", 00:16:22.359 "num_blocks": 8192, 00:16:22.359 "optimal_io_boundary": 0, 00:16:22.359 "physical_block_size": 4096, 00:16:22.359 "uuid": "c66f52a1-43cb-47ae-b3fa-9f191284c11d" 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "bdev_wait_for_examine" 00:16:22.359 } 00:16:22.359 ] 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "subsystem": "nbd", 00:16:22.359 "config": [] 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "subsystem": "scheduler", 00:16:22.359 "config": [ 00:16:22.359 { 00:16:22.359 "method": "framework_set_scheduler", 00:16:22.359 "params": { 00:16:22.359 "name": "static" 00:16:22.359 } 00:16:22.359 } 00:16:22.359 ] 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "subsystem": "nvmf", 00:16:22.359 "config": [ 00:16:22.359 { 00:16:22.359 "method": "nvmf_set_config", 00:16:22.359 "params": { 00:16:22.359 "admin_cmd_passthru": { 00:16:22.359 "identify_ctrlr": false 00:16:22.359 }, 00:16:22.359 "discovery_filter": "match_any" 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "nvmf_set_max_subsystems", 00:16:22.359 "params": { 00:16:22.359 "max_subsystems": 1024 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "nvmf_set_crdt", 00:16:22.359 "params": { 00:16:22.359 "crdt1": 0, 00:16:22.359 "crdt2": 0, 00:16:22.359 "crdt3": 0 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "nvmf_create_transport", 00:16:22.359 "params": { 00:16:22.359 "abort_timeout_sec": 1, 00:16:22.359 "ack_timeout": 0, 00:16:22.359 "buf_cache_size": 4294967295, 00:16:22.359 "c2h_success": false, 00:16:22.359 "data_wr_pool_size": 0, 00:16:22.359 "dif_insert_or_strip": false, 00:16:22.359 "in_capsule_data_size": 4096, 00:16:22.359 "io_unit_size": 131072, 00:16:22.359 "max_aq_depth": 128, 00:16:22.359 "max_io_qpairs_per_ctrlr": 127, 00:16:22.359 "max_io_size": 131072, 00:16:22.359 "max_queue_depth": 128, 00:16:22.359 "num_shared_buffers": 511, 00:16:22.359 "sock_priority": 0, 00:16:22.359 "trtype": "TCP", 00:16:22.359 "zcopy": false 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "nvmf_create_subsystem", 00:16:22.359 "params": { 00:16:22.359 "allow_any_host": false, 00:16:22.359 "ana_reporting": false, 00:16:22.359 "max_cntlid": 65519, 00:16:22.359 "max_namespaces": 10, 00:16:22.359 "min_cntlid": 1, 00:16:22.359 "model_number": "SPDK bdev Controller", 00:16:22.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.359 "serial_number": "SPDK00000000000001" 00:16:22.359 
} 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "nvmf_subsystem_add_host", 00:16:22.359 "params": { 00:16:22.359 "host": "nqn.2016-06.io.spdk:host1", 00:16:22.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.359 "psk": "/tmp/tmp.L0ReyH6Ojp" 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "nvmf_subsystem_add_ns", 00:16:22.359 "params": { 00:16:22.359 "namespace": { 00:16:22.359 "bdev_name": "malloc0", 00:16:22.359 "nguid": "C66F52A143CB47AEB3FA9F191284C11D", 00:16:22.359 "no_auto_visible": false, 00:16:22.359 "nsid": 1, 00:16:22.359 "uuid": "c66f52a1-43cb-47ae-b3fa-9f191284c11d" 00:16:22.359 }, 00:16:22.359 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:22.359 } 00:16:22.359 }, 00:16:22.359 { 00:16:22.359 "method": "nvmf_subsystem_add_listener", 00:16:22.359 "params": { 00:16:22.359 "listen_address": { 00:16:22.359 "adrfam": "IPv4", 00:16:22.359 "traddr": "10.0.0.2", 00:16:22.359 "trsvcid": "4420", 00:16:22.359 "trtype": "TCP" 00:16:22.359 }, 00:16:22.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.360 "secure_channel": true 00:16:22.360 } 00:16:22.360 } 00:16:22.360 ] 00:16:22.360 } 00:16:22.360 ] 00:16:22.360 }' 00:16:22.360 15:39:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84378 00:16:22.360 15:39:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84378 00:16:22.360 15:39:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:22.360 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84378 ']' 00:16:22.360 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.360 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.360 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.360 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.360 15:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.360 [2024-07-15 15:39:17.415412] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:22.360 [2024-07-15 15:39:17.415477] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.619 [2024-07-15 15:39:17.545789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.619 [2024-07-15 15:39:17.598869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.619 [2024-07-15 15:39:17.598918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.619 [2024-07-15 15:39:17.598945] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.619 [2024-07-15 15:39:17.598953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.619 [2024-07-15 15:39:17.598960] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
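At target/tls.sh@196-203 the test captures the running target's JSON configuration with save_config and replays it into a fresh nvmf_tgt through -c /dev/fd/62, which is what the large echoed JSON block above is. A rough sketch of that pattern, assuming the script produces the /dev/fd path with bash process substitution (commands and flags as shown in the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Capture the live configuration, including the TLS listener and the PSK host entry.
tgtconf=$($rpc save_config)

# Start a new target directly from that captured configuration.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
    -c <(echo "$tgtconf") &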
00:16:22.619 [2024-07-15 15:39:17.599059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.877 [2024-07-15 15:39:17.776100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.877 [2024-07-15 15:39:17.792046] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:22.877 [2024-07-15 15:39:17.808048] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:22.877 [2024-07-15 15:39:17.808224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84422 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84422 /var/tmp/bdevperf.sock 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84422 ']' 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
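In the first bdevperf run (target/tls.sh@187-192, earlier in the log) the TLS controller was attached with a direct RPC; the run starting here replays the saved bdevperf configuration via -c /dev/fd/63 instead, so the same bdev_nvme_attach_controller parameters arrive through the config file. A condensed sketch of the direct-RPC form, with every command taken from the log:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Start bdevperf idle (-z) with its own RPC socket.
$bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# Attach the TLS-protected controller, passing the PSK file directly.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.L0ReyH6Ojp

# Drive I/O over the bdevperf RPC socket.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests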
00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:23.444 15:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:16:23.444 "subsystems": [ 00:16:23.444 { 00:16:23.444 "subsystem": "keyring", 00:16:23.444 "config": [] 00:16:23.444 }, 00:16:23.444 { 00:16:23.444 "subsystem": "iobuf", 00:16:23.444 "config": [ 00:16:23.444 { 00:16:23.444 "method": "iobuf_set_options", 00:16:23.444 "params": { 00:16:23.444 "large_bufsize": 135168, 00:16:23.444 "large_pool_count": 1024, 00:16:23.444 "small_bufsize": 8192, 00:16:23.444 "small_pool_count": 8192 00:16:23.444 } 00:16:23.444 } 00:16:23.444 ] 00:16:23.444 }, 00:16:23.444 { 00:16:23.444 "subsystem": "sock", 00:16:23.444 "config": [ 00:16:23.444 { 00:16:23.444 "method": "sock_set_default_impl", 00:16:23.444 "params": { 00:16:23.444 "impl_name": "posix" 00:16:23.444 } 00:16:23.444 }, 00:16:23.444 { 00:16:23.444 "method": "sock_impl_set_options", 00:16:23.444 "params": { 00:16:23.444 "enable_ktls": false, 00:16:23.444 "enable_placement_id": 0, 00:16:23.444 "enable_quickack": false, 00:16:23.444 "enable_recv_pipe": true, 00:16:23.444 "enable_zerocopy_send_client": false, 00:16:23.444 "enable_zerocopy_send_server": true, 00:16:23.444 "impl_name": "ssl", 00:16:23.444 "recv_buf_size": 4096, 00:16:23.444 "send_buf_size": 4096, 00:16:23.444 "tls_version": 0, 00:16:23.444 "zerocopy_threshold": 0 00:16:23.444 } 00:16:23.444 }, 00:16:23.444 { 00:16:23.444 "method": "sock_impl_set_options", 00:16:23.444 "params": { 00:16:23.444 "enable_ktls": false, 00:16:23.444 "enable_placement_id": 0, 00:16:23.444 "enable_quickack": false, 00:16:23.444 "enable_recv_pipe": true, 00:16:23.444 "enable_zerocopy_send_client": false, 00:16:23.444 "enable_zerocopy_send_server": true, 00:16:23.444 "impl_name": "posix", 00:16:23.444 "recv_buf_size": 2097152, 00:16:23.444 "send_buf_size": 2097152, 00:16:23.444 "tls_version": 0, 00:16:23.444 "zerocopy_threshold": 0 00:16:23.444 } 00:16:23.444 } 00:16:23.444 ] 00:16:23.444 }, 00:16:23.444 { 00:16:23.444 "subsystem": "vmd", 00:16:23.444 "config": [] 00:16:23.444 }, 00:16:23.444 { 00:16:23.444 "subsystem": "accel", 00:16:23.444 "config": [ 00:16:23.444 { 00:16:23.444 "method": "accel_set_options", 00:16:23.444 "params": { 00:16:23.444 "buf_count": 2048, 00:16:23.444 "large_cache_size": 16, 00:16:23.444 "sequence_count": 2048, 00:16:23.444 "small_cache_size": 128, 00:16:23.444 "task_count": 2048 00:16:23.444 } 00:16:23.444 } 00:16:23.444 ] 00:16:23.444 }, 00:16:23.444 { 00:16:23.444 "subsystem": "bdev", 00:16:23.444 "config": [ 00:16:23.444 { 00:16:23.444 "method": "bdev_set_options", 00:16:23.444 "params": { 00:16:23.444 "bdev_auto_examine": true, 00:16:23.444 "bdev_io_cache_size": 256, 00:16:23.444 "bdev_io_pool_size": 65535, 00:16:23.444 "iobuf_large_cache_size": 16, 00:16:23.444 "iobuf_small_cache_size": 128 00:16:23.444 } 00:16:23.444 }, 00:16:23.444 { 00:16:23.444 "method": "bdev_raid_set_options", 00:16:23.444 "params": { 00:16:23.444 "process_window_size_kb": 1024 00:16:23.444 } 00:16:23.444 }, 00:16:23.444 { 00:16:23.444 "method": "bdev_iscsi_set_options", 00:16:23.444 "params": { 00:16:23.444 "timeout_sec": 30 00:16:23.444 } 00:16:23.444 }, 00:16:23.444 { 00:16:23.444 "method": 
"bdev_nvme_set_options", 00:16:23.444 "params": { 00:16:23.444 "action_on_timeout": "none", 00:16:23.444 "allow_accel_sequence": false, 00:16:23.444 "arbitration_burst": 0, 00:16:23.444 "bdev_retry_count": 3, 00:16:23.444 "ctrlr_loss_timeout_sec": 0, 00:16:23.444 "delay_cmd_submit": true, 00:16:23.444 "dhchap_dhgroups": [ 00:16:23.444 "null", 00:16:23.444 "ffdhe2048", 00:16:23.444 "ffdhe3072", 00:16:23.444 "ffdhe4096", 00:16:23.444 "ffdhe6144", 00:16:23.444 "ffdhe8192" 00:16:23.444 ], 00:16:23.444 "dhchap_digests": [ 00:16:23.444 "sha256", 00:16:23.444 "sha384", 00:16:23.444 "sha512" 00:16:23.444 ], 00:16:23.444 "disable_auto_failback": false, 00:16:23.444 "fast_io_fail_timeout_sec": 0, 00:16:23.444 "generate_uuids": false, 00:16:23.444 "high_priority_weight": 0, 00:16:23.445 "io_path_stat": false, 00:16:23.445 "io_queue_requests": 512, 00:16:23.445 "keep_alive_timeout_ms": 10000, 00:16:23.445 "low_priority_weight": 0, 00:16:23.445 "medium_priority_weight": 0, 00:16:23.445 "nvme_adminq_poll_period_us": 10000, 00:16:23.445 "nvme_error_stat": false, 00:16:23.445 "nvme_ioq_poll_period_us": 0, 00:16:23.445 "rdma_cm_event_timeout_ms": 0, 00:16:23.445 "rdma_max_cq_size": 0, 00:16:23.445 "rdma_srq_size": 0, 00:16:23.445 "reconnect_delay_sec": 0, 00:16:23.445 "timeout_admin_us": 0, 00:16:23.445 "timeout_us": 0, 00:16:23.445 "transport_ack_timeout": 0, 00:16:23.445 "transport_retry_count": 4, 00:16:23.445 "transport_tos": 0 00:16:23.445 } 00:16:23.445 }, 00:16:23.445 { 00:16:23.445 "method": "bdev_nvme_attach_controller", 00:16:23.445 "params": { 00:16:23.445 "adrfam": "IPv4", 00:16:23.445 "ctrlr_loss_timeout_sec": 0, 00:16:23.445 "ddgst": false, 00:16:23.445 "fast_io_fail_timeout_sec": 0, 00:16:23.445 "hdgst": false, 00:16:23.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:23.445 "name": "TLSTEST", 00:16:23.445 "prchk_guard": false, 00:16:23.445 "prchk_reftag": false, 00:16:23.445 "psk": "/tmp/tmp.L0ReyH6Ojp", 00:16:23.445 "reconnect_delay_sec": 0, 00:16:23.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.445 "traddr": "10.0.0.2", 00:16:23.445 "trsvcid": "4420", 00:16:23.445 "trtype": "TCP" 00:16:23.445 } 00:16:23.445 }, 00:16:23.445 { 00:16:23.445 "method": "bdev_nvme_set_hotplug", 00:16:23.445 "params": { 00:16:23.445 "enable": false, 00:16:23.445 "period_us": 100000 00:16:23.445 } 00:16:23.445 }, 00:16:23.445 { 00:16:23.445 "method": "bdev_wait_for_examine" 00:16:23.445 } 00:16:23.445 ] 00:16:23.445 }, 00:16:23.445 { 00:16:23.445 "subsystem": "nbd", 00:16:23.445 "config": [] 00:16:23.445 } 00:16:23.445 ] 00:16:23.445 }' 00:16:23.445 [2024-07-15 15:39:18.437655] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:16:23.445 [2024-07-15 15:39:18.437744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84422 ] 00:16:23.703 [2024-07-15 15:39:18.578138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.703 [2024-07-15 15:39:18.646612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.703 [2024-07-15 15:39:18.771506] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:23.703 [2024-07-15 15:39:18.771881] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:24.638 15:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.638 15:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:24.638 15:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:24.638 Running I/O for 10 seconds... 00:16:34.626 00:16:34.626 Latency(us) 00:16:34.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.626 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:34.626 Verification LBA range: start 0x0 length 0x2000 00:16:34.626 TLSTESTn1 : 10.02 4322.74 16.89 0.00 0.00 29546.21 5659.93 18707.55 00:16:34.626 =================================================================================================================== 00:16:34.626 Total : 4322.74 16.89 0.00 0.00 29546.21 5659.93 18707.55 00:16:34.626 0 00:16:34.626 15:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:34.626 15:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84422 00:16:34.626 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84422 ']' 00:16:34.626 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84422 00:16:34.626 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:34.626 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:34.626 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84422 00:16:34.626 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:34.626 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:34.626 killing process with pid 84422 00:16:34.626 Received shutdown signal, test time was about 10.000000 seconds 00:16:34.626 00:16:34.626 Latency(us) 00:16:34.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.626 =================================================================================================================== 00:16:34.626 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:34.626 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84422' 00:16:34.626 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84422 00:16:34.626 [2024-07-15 15:39:29.613528] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:34.626 15:39:29 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84422 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84378 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84378 ']' 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84378 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84378 00:16:34.885 killing process with pid 84378 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84378' 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84378 00:16:34.885 [2024-07-15 15:39:29.789349] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84378 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84567 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84567 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84567 ']' 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.885 15:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.885 [2024-07-15 15:39:29.988260] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:34.885 [2024-07-15 15:39:29.988343] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.144 [2024-07-15 15:39:30.125493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.144 [2024-07-15 15:39:30.194796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:35.144 [2024-07-15 15:39:30.195130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.144 [2024-07-15 15:39:30.195324] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.144 [2024-07-15 15:39:30.195611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.144 [2024-07-15 15:39:30.195787] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.144 [2024-07-15 15:39:30.196006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.080 15:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.080 15:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:36.080 15:39:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:36.080 15:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:36.080 15:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.080 15:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.080 15:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.L0ReyH6Ojp 00:16:36.080 15:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.L0ReyH6Ojp 00:16:36.080 15:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:36.338 [2024-07-15 15:39:31.220400] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.338 15:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:36.338 15:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:36.596 [2024-07-15 15:39:31.648473] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:36.596 [2024-07-15 15:39:31.648708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.596 15:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:36.855 malloc0 00:16:36.855 15:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:37.113 15:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L0ReyH6Ojp 00:16:37.372 [2024-07-15 15:39:32.275122] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:37.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
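The phase that follows (target/tls.sh@227-228) repeats the TLS connection, but the initiator now registers the PSK through the keyring API and refers to it by name instead of handing bdev_nvme_attach_controller a file path. A condensed sketch, using the same key file and commands shown in the next lines of the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Register the PSK file as a named key on the bdevperf instance.
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L0ReyH6Ojp

# Attach the controller, referencing the key by name rather than by path.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1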
00:16:37.372 15:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:37.372 15:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=84664 00:16:37.372 15:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:37.372 15:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 84664 /var/tmp/bdevperf.sock 00:16:37.372 15:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84664 ']' 00:16:37.372 15:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:37.372 15:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.372 15:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:37.372 15:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.372 15:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:37.372 [2024-07-15 15:39:32.335844] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:37.372 [2024-07-15 15:39:32.335945] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84664 ] 00:16:37.372 [2024-07-15 15:39:32.473600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.630 [2024-07-15 15:39:32.542281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.630 15:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.630 15:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:37.630 15:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L0ReyH6Ojp 00:16:37.888 15:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:38.147 [2024-07-15 15:39:33.039313] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:38.147 nvme0n1 00:16:38.147 15:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:38.147 Running I/O for 1 seconds... 
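The initiator side of this pass is bdevperf driven over its own RPC socket; below is a minimal sketch of the commands traced above. The PSK file is registered into the keyring as key0 and then referenced by name when attaching the controller, and the queue depth, 4k I/O size, and 1-second runtime are just this test's bdevperf parameters.

    # Start bdevperf idle (-z) with its RPC server on /var/tmp/bdevperf.sock.
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &

    # Register the PSK as key0, then attach to the TLS listener using that key.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L0ReyH6Ojp
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

    # Run the 1-second verify workload against the attached namespace (nvme0n1 in the results below).
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests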
00:16:39.520 00:16:39.520 Latency(us) 00:16:39.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.520 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:39.520 Verification LBA range: start 0x0 length 0x2000 00:16:39.520 nvme0n1 : 1.02 4390.47 17.15 0.00 0.00 28865.24 6345.08 18350.08 00:16:39.520 =================================================================================================================== 00:16:39.520 Total : 4390.47 17.15 0.00 0.00 28865.24 6345.08 18350.08 00:16:39.520 0 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 84664 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84664 ']' 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84664 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84664 00:16:39.520 killing process with pid 84664 00:16:39.520 Received shutdown signal, test time was about 1.000000 seconds 00:16:39.520 00:16:39.520 Latency(us) 00:16:39.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.520 =================================================================================================================== 00:16:39.520 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84664' 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84664 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84664 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 84567 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84567 ']' 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84567 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84567 00:16:39.520 killing process with pid 84567 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84567' 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84567 00:16:39.520 [2024-07-15 15:39:34.452426] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84567 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:39.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84726 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84726 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84726 ']' 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.520 15:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:39.779 [2024-07-15 15:39:34.665869] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:39.779 [2024-07-15 15:39:34.665970] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.779 [2024-07-15 15:39:34.802841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.779 [2024-07-15 15:39:34.856256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.779 [2024-07-15 15:39:34.856317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.779 [2024-07-15 15:39:34.856343] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.779 [2024-07-15 15:39:34.856351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.779 [2024-07-15 15:39:34.856357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:39.779 [2024-07-15 15:39:34.856382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.712 [2024-07-15 15:39:35.679422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.712 malloc0 00:16:40.712 [2024-07-15 15:39:35.705224] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:40.712 [2024-07-15 15:39:35.705439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=84776 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 84776 /var/tmp/bdevperf.sock 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84776 ']' 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:40.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.712 15:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.712 [2024-07-15 15:39:35.781660] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:16:40.712 [2024-07-15 15:39:35.781759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84776 ] 00:16:40.971 [2024-07-15 15:39:35.918097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.971 [2024-07-15 15:39:35.986396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.971 15:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.971 15:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:40.971 15:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L0ReyH6Ojp 00:16:41.228 15:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:41.486 [2024-07-15 15:39:36.494455] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:41.486 nvme0n1 00:16:41.486 15:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:41.743 Running I/O for 1 seconds... 00:16:42.728 00:16:42.728 Latency(us) 00:16:42.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.728 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:42.728 Verification LBA range: start 0x0 length 0x2000 00:16:42.728 nvme0n1 : 1.02 4315.87 16.86 0.00 0.00 29243.90 6494.02 18469.24 00:16:42.728 =================================================================================================================== 00:16:42.728 Total : 4315.87 16.86 0.00 0.00 29243.90 6494.02 18469.24 00:16:42.728 0 00:16:42.728 15:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:16:42.728 15:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.728 15:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:42.728 15:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.988 15:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:16:42.988 "subsystems": [ 00:16:42.988 { 00:16:42.988 "subsystem": "keyring", 00:16:42.988 "config": [ 00:16:42.988 { 00:16:42.988 "method": "keyring_file_add_key", 00:16:42.988 "params": { 00:16:42.988 "name": "key0", 00:16:42.988 "path": "/tmp/tmp.L0ReyH6Ojp" 00:16:42.988 } 00:16:42.988 } 00:16:42.988 ] 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "subsystem": "iobuf", 00:16:42.988 "config": [ 00:16:42.988 { 00:16:42.988 "method": "iobuf_set_options", 00:16:42.988 "params": { 00:16:42.988 "large_bufsize": 135168, 00:16:42.988 "large_pool_count": 1024, 00:16:42.988 "small_bufsize": 8192, 00:16:42.988 "small_pool_count": 8192 00:16:42.988 } 00:16:42.988 } 00:16:42.988 ] 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "subsystem": "sock", 00:16:42.988 "config": [ 00:16:42.988 { 00:16:42.988 "method": "sock_set_default_impl", 00:16:42.988 "params": { 00:16:42.988 "impl_name": "posix" 00:16:42.988 } 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "method": "sock_impl_set_options", 00:16:42.988 "params": { 00:16:42.988 
"enable_ktls": false, 00:16:42.988 "enable_placement_id": 0, 00:16:42.988 "enable_quickack": false, 00:16:42.988 "enable_recv_pipe": true, 00:16:42.988 "enable_zerocopy_send_client": false, 00:16:42.988 "enable_zerocopy_send_server": true, 00:16:42.988 "impl_name": "ssl", 00:16:42.988 "recv_buf_size": 4096, 00:16:42.988 "send_buf_size": 4096, 00:16:42.988 "tls_version": 0, 00:16:42.988 "zerocopy_threshold": 0 00:16:42.988 } 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "method": "sock_impl_set_options", 00:16:42.988 "params": { 00:16:42.988 "enable_ktls": false, 00:16:42.988 "enable_placement_id": 0, 00:16:42.988 "enable_quickack": false, 00:16:42.988 "enable_recv_pipe": true, 00:16:42.988 "enable_zerocopy_send_client": false, 00:16:42.988 "enable_zerocopy_send_server": true, 00:16:42.988 "impl_name": "posix", 00:16:42.988 "recv_buf_size": 2097152, 00:16:42.988 "send_buf_size": 2097152, 00:16:42.988 "tls_version": 0, 00:16:42.988 "zerocopy_threshold": 0 00:16:42.988 } 00:16:42.988 } 00:16:42.988 ] 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "subsystem": "vmd", 00:16:42.988 "config": [] 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "subsystem": "accel", 00:16:42.988 "config": [ 00:16:42.988 { 00:16:42.988 "method": "accel_set_options", 00:16:42.988 "params": { 00:16:42.988 "buf_count": 2048, 00:16:42.988 "large_cache_size": 16, 00:16:42.988 "sequence_count": 2048, 00:16:42.988 "small_cache_size": 128, 00:16:42.988 "task_count": 2048 00:16:42.988 } 00:16:42.988 } 00:16:42.988 ] 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "subsystem": "bdev", 00:16:42.988 "config": [ 00:16:42.988 { 00:16:42.988 "method": "bdev_set_options", 00:16:42.988 "params": { 00:16:42.988 "bdev_auto_examine": true, 00:16:42.988 "bdev_io_cache_size": 256, 00:16:42.988 "bdev_io_pool_size": 65535, 00:16:42.988 "iobuf_large_cache_size": 16, 00:16:42.988 "iobuf_small_cache_size": 128 00:16:42.988 } 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "method": "bdev_raid_set_options", 00:16:42.988 "params": { 00:16:42.988 "process_window_size_kb": 1024 00:16:42.988 } 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "method": "bdev_iscsi_set_options", 00:16:42.988 "params": { 00:16:42.988 "timeout_sec": 30 00:16:42.988 } 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "method": "bdev_nvme_set_options", 00:16:42.988 "params": { 00:16:42.988 "action_on_timeout": "none", 00:16:42.988 "allow_accel_sequence": false, 00:16:42.988 "arbitration_burst": 0, 00:16:42.988 "bdev_retry_count": 3, 00:16:42.988 "ctrlr_loss_timeout_sec": 0, 00:16:42.988 "delay_cmd_submit": true, 00:16:42.988 "dhchap_dhgroups": [ 00:16:42.988 "null", 00:16:42.988 "ffdhe2048", 00:16:42.988 "ffdhe3072", 00:16:42.988 "ffdhe4096", 00:16:42.988 "ffdhe6144", 00:16:42.988 "ffdhe8192" 00:16:42.988 ], 00:16:42.988 "dhchap_digests": [ 00:16:42.988 "sha256", 00:16:42.988 "sha384", 00:16:42.988 "sha512" 00:16:42.988 ], 00:16:42.988 "disable_auto_failback": false, 00:16:42.988 "fast_io_fail_timeout_sec": 0, 00:16:42.988 "generate_uuids": false, 00:16:42.988 "high_priority_weight": 0, 00:16:42.988 "io_path_stat": false, 00:16:42.988 "io_queue_requests": 0, 00:16:42.988 "keep_alive_timeout_ms": 10000, 00:16:42.988 "low_priority_weight": 0, 00:16:42.988 "medium_priority_weight": 0, 00:16:42.988 "nvme_adminq_poll_period_us": 10000, 00:16:42.988 "nvme_error_stat": false, 00:16:42.988 "nvme_ioq_poll_period_us": 0, 00:16:42.988 "rdma_cm_event_timeout_ms": 0, 00:16:42.988 "rdma_max_cq_size": 0, 00:16:42.988 "rdma_srq_size": 0, 00:16:42.988 "reconnect_delay_sec": 0, 00:16:42.988 "timeout_admin_us": 0, 
00:16:42.988 "timeout_us": 0, 00:16:42.988 "transport_ack_timeout": 0, 00:16:42.988 "transport_retry_count": 4, 00:16:42.988 "transport_tos": 0 00:16:42.988 } 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "method": "bdev_nvme_set_hotplug", 00:16:42.988 "params": { 00:16:42.988 "enable": false, 00:16:42.988 "period_us": 100000 00:16:42.988 } 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "method": "bdev_malloc_create", 00:16:42.988 "params": { 00:16:42.988 "block_size": 4096, 00:16:42.988 "name": "malloc0", 00:16:42.988 "num_blocks": 8192, 00:16:42.988 "optimal_io_boundary": 0, 00:16:42.988 "physical_block_size": 4096, 00:16:42.988 "uuid": "41111f64-39a2-4f46-a351-996467b3b064" 00:16:42.988 } 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "method": "bdev_wait_for_examine" 00:16:42.988 } 00:16:42.988 ] 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "subsystem": "nbd", 00:16:42.988 "config": [] 00:16:42.988 }, 00:16:42.988 { 00:16:42.988 "subsystem": "scheduler", 00:16:42.988 "config": [ 00:16:42.989 { 00:16:42.989 "method": "framework_set_scheduler", 00:16:42.989 "params": { 00:16:42.989 "name": "static" 00:16:42.989 } 00:16:42.989 } 00:16:42.989 ] 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "subsystem": "nvmf", 00:16:42.989 "config": [ 00:16:42.989 { 00:16:42.989 "method": "nvmf_set_config", 00:16:42.989 "params": { 00:16:42.989 "admin_cmd_passthru": { 00:16:42.989 "identify_ctrlr": false 00:16:42.989 }, 00:16:42.989 "discovery_filter": "match_any" 00:16:42.989 } 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "method": "nvmf_set_max_subsystems", 00:16:42.989 "params": { 00:16:42.989 "max_subsystems": 1024 00:16:42.989 } 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "method": "nvmf_set_crdt", 00:16:42.989 "params": { 00:16:42.989 "crdt1": 0, 00:16:42.989 "crdt2": 0, 00:16:42.989 "crdt3": 0 00:16:42.989 } 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "method": "nvmf_create_transport", 00:16:42.989 "params": { 00:16:42.989 "abort_timeout_sec": 1, 00:16:42.989 "ack_timeout": 0, 00:16:42.989 "buf_cache_size": 4294967295, 00:16:42.989 "c2h_success": false, 00:16:42.989 "data_wr_pool_size": 0, 00:16:42.989 "dif_insert_or_strip": false, 00:16:42.989 "in_capsule_data_size": 4096, 00:16:42.989 "io_unit_size": 131072, 00:16:42.989 "max_aq_depth": 128, 00:16:42.989 "max_io_qpairs_per_ctrlr": 127, 00:16:42.989 "max_io_size": 131072, 00:16:42.989 "max_queue_depth": 128, 00:16:42.989 "num_shared_buffers": 511, 00:16:42.989 "sock_priority": 0, 00:16:42.989 "trtype": "TCP", 00:16:42.989 "zcopy": false 00:16:42.989 } 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "method": "nvmf_create_subsystem", 00:16:42.989 "params": { 00:16:42.989 "allow_any_host": false, 00:16:42.989 "ana_reporting": false, 00:16:42.989 "max_cntlid": 65519, 00:16:42.989 "max_namespaces": 32, 00:16:42.989 "min_cntlid": 1, 00:16:42.989 "model_number": "SPDK bdev Controller", 00:16:42.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.989 "serial_number": "00000000000000000000" 00:16:42.989 } 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "method": "nvmf_subsystem_add_host", 00:16:42.989 "params": { 00:16:42.989 "host": "nqn.2016-06.io.spdk:host1", 00:16:42.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.989 "psk": "key0" 00:16:42.989 } 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "method": "nvmf_subsystem_add_ns", 00:16:42.989 "params": { 00:16:42.989 "namespace": { 00:16:42.989 "bdev_name": "malloc0", 00:16:42.989 "nguid": "41111F6439A24F46A351996467B3B064", 00:16:42.989 "no_auto_visible": false, 00:16:42.989 "nsid": 1, 00:16:42.989 "uuid": 
"41111f64-39a2-4f46-a351-996467b3b064" 00:16:42.989 }, 00:16:42.989 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:42.989 } 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "method": "nvmf_subsystem_add_listener", 00:16:42.989 "params": { 00:16:42.989 "listen_address": { 00:16:42.989 "adrfam": "IPv4", 00:16:42.989 "traddr": "10.0.0.2", 00:16:42.989 "trsvcid": "4420", 00:16:42.989 "trtype": "TCP" 00:16:42.989 }, 00:16:42.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.989 "secure_channel": true 00:16:42.989 } 00:16:42.989 } 00:16:42.989 ] 00:16:42.989 } 00:16:42.989 ] 00:16:42.989 }' 00:16:42.989 15:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:43.249 15:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:16:43.249 "subsystems": [ 00:16:43.249 { 00:16:43.249 "subsystem": "keyring", 00:16:43.249 "config": [ 00:16:43.249 { 00:16:43.249 "method": "keyring_file_add_key", 00:16:43.249 "params": { 00:16:43.249 "name": "key0", 00:16:43.249 "path": "/tmp/tmp.L0ReyH6Ojp" 00:16:43.249 } 00:16:43.249 } 00:16:43.249 ] 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "subsystem": "iobuf", 00:16:43.249 "config": [ 00:16:43.249 { 00:16:43.249 "method": "iobuf_set_options", 00:16:43.249 "params": { 00:16:43.249 "large_bufsize": 135168, 00:16:43.249 "large_pool_count": 1024, 00:16:43.249 "small_bufsize": 8192, 00:16:43.249 "small_pool_count": 8192 00:16:43.249 } 00:16:43.249 } 00:16:43.249 ] 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "subsystem": "sock", 00:16:43.249 "config": [ 00:16:43.249 { 00:16:43.249 "method": "sock_set_default_impl", 00:16:43.249 "params": { 00:16:43.249 "impl_name": "posix" 00:16:43.249 } 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "method": "sock_impl_set_options", 00:16:43.249 "params": { 00:16:43.249 "enable_ktls": false, 00:16:43.249 "enable_placement_id": 0, 00:16:43.249 "enable_quickack": false, 00:16:43.249 "enable_recv_pipe": true, 00:16:43.249 "enable_zerocopy_send_client": false, 00:16:43.249 "enable_zerocopy_send_server": true, 00:16:43.249 "impl_name": "ssl", 00:16:43.249 "recv_buf_size": 4096, 00:16:43.249 "send_buf_size": 4096, 00:16:43.249 "tls_version": 0, 00:16:43.249 "zerocopy_threshold": 0 00:16:43.249 } 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "method": "sock_impl_set_options", 00:16:43.249 "params": { 00:16:43.249 "enable_ktls": false, 00:16:43.249 "enable_placement_id": 0, 00:16:43.249 "enable_quickack": false, 00:16:43.249 "enable_recv_pipe": true, 00:16:43.249 "enable_zerocopy_send_client": false, 00:16:43.249 "enable_zerocopy_send_server": true, 00:16:43.249 "impl_name": "posix", 00:16:43.249 "recv_buf_size": 2097152, 00:16:43.249 "send_buf_size": 2097152, 00:16:43.249 "tls_version": 0, 00:16:43.249 "zerocopy_threshold": 0 00:16:43.249 } 00:16:43.249 } 00:16:43.249 ] 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "subsystem": "vmd", 00:16:43.249 "config": [] 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "subsystem": "accel", 00:16:43.249 "config": [ 00:16:43.249 { 00:16:43.249 "method": "accel_set_options", 00:16:43.249 "params": { 00:16:43.249 "buf_count": 2048, 00:16:43.249 "large_cache_size": 16, 00:16:43.249 "sequence_count": 2048, 00:16:43.249 "small_cache_size": 128, 00:16:43.249 "task_count": 2048 00:16:43.249 } 00:16:43.249 } 00:16:43.249 ] 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "subsystem": "bdev", 00:16:43.249 "config": [ 00:16:43.249 { 00:16:43.249 "method": "bdev_set_options", 00:16:43.249 "params": { 00:16:43.249 "bdev_auto_examine": true, 
00:16:43.249 "bdev_io_cache_size": 256, 00:16:43.249 "bdev_io_pool_size": 65535, 00:16:43.249 "iobuf_large_cache_size": 16, 00:16:43.249 "iobuf_small_cache_size": 128 00:16:43.249 } 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "method": "bdev_raid_set_options", 00:16:43.249 "params": { 00:16:43.249 "process_window_size_kb": 1024 00:16:43.249 } 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "method": "bdev_iscsi_set_options", 00:16:43.249 "params": { 00:16:43.249 "timeout_sec": 30 00:16:43.249 } 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "method": "bdev_nvme_set_options", 00:16:43.249 "params": { 00:16:43.249 "action_on_timeout": "none", 00:16:43.249 "allow_accel_sequence": false, 00:16:43.249 "arbitration_burst": 0, 00:16:43.249 "bdev_retry_count": 3, 00:16:43.249 "ctrlr_loss_timeout_sec": 0, 00:16:43.249 "delay_cmd_submit": true, 00:16:43.249 "dhchap_dhgroups": [ 00:16:43.249 "null", 00:16:43.249 "ffdhe2048", 00:16:43.249 "ffdhe3072", 00:16:43.249 "ffdhe4096", 00:16:43.249 "ffdhe6144", 00:16:43.249 "ffdhe8192" 00:16:43.249 ], 00:16:43.249 "dhchap_digests": [ 00:16:43.249 "sha256", 00:16:43.249 "sha384", 00:16:43.249 "sha512" 00:16:43.249 ], 00:16:43.249 "disable_auto_failback": false, 00:16:43.249 "fast_io_fail_timeout_sec": 0, 00:16:43.249 "generate_uuids": false, 00:16:43.249 "high_priority_weight": 0, 00:16:43.249 "io_path_stat": false, 00:16:43.249 "io_queue_requests": 512, 00:16:43.249 "keep_alive_timeout_ms": 10000, 00:16:43.249 "low_priority_weight": 0, 00:16:43.249 "medium_priority_weight": 0, 00:16:43.249 "nvme_adminq_poll_period_us": 10000, 00:16:43.249 "nvme_error_stat": false, 00:16:43.249 "nvme_ioq_poll_period_us": 0, 00:16:43.249 "rdma_cm_event_timeout_ms": 0, 00:16:43.249 "rdma_max_cq_size": 0, 00:16:43.249 "rdma_srq_size": 0, 00:16:43.249 "reconnect_delay_sec": 0, 00:16:43.249 "timeout_admin_us": 0, 00:16:43.249 "timeout_us": 0, 00:16:43.249 "transport_ack_timeout": 0, 00:16:43.249 "transport_retry_count": 4, 00:16:43.249 "transport_tos": 0 00:16:43.249 } 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "method": "bdev_nvme_attach_controller", 00:16:43.249 "params": { 00:16:43.249 "adrfam": "IPv4", 00:16:43.249 "ctrlr_loss_timeout_sec": 0, 00:16:43.249 "ddgst": false, 00:16:43.249 "fast_io_fail_timeout_sec": 0, 00:16:43.249 "hdgst": false, 00:16:43.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:43.249 "name": "nvme0", 00:16:43.249 "prchk_guard": false, 00:16:43.249 "prchk_reftag": false, 00:16:43.249 "psk": "key0", 00:16:43.249 "reconnect_delay_sec": 0, 00:16:43.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:43.249 "traddr": "10.0.0.2", 00:16:43.249 "trsvcid": "4420", 00:16:43.249 "trtype": "TCP" 00:16:43.249 } 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "method": "bdev_nvme_set_hotplug", 00:16:43.249 "params": { 00:16:43.249 "enable": false, 00:16:43.249 "period_us": 100000 00:16:43.249 } 00:16:43.249 }, 00:16:43.249 { 00:16:43.249 "method": "bdev_enable_histogram", 00:16:43.250 "params": { 00:16:43.250 "enable": true, 00:16:43.250 "name": "nvme0n1" 00:16:43.250 } 00:16:43.250 }, 00:16:43.250 { 00:16:43.250 "method": "bdev_wait_for_examine" 00:16:43.250 } 00:16:43.250 ] 00:16:43.250 }, 00:16:43.250 { 00:16:43.250 "subsystem": "nbd", 00:16:43.250 "config": [] 00:16:43.250 } 00:16:43.250 ] 00:16:43.250 }' 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 84776 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84776 ']' 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84776 
00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84776 00:16:43.250 killing process with pid 84776 00:16:43.250 Received shutdown signal, test time was about 1.000000 seconds 00:16:43.250 00:16:43.250 Latency(us) 00:16:43.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.250 =================================================================================================================== 00:16:43.250 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84776' 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84776 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84776 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 84726 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84726 ']' 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84726 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84726 00:16:43.250 killing process with pid 84726 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84726' 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84726 00:16:43.250 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84726 00:16:43.509 15:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:16:43.509 15:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.509 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.509 15:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:16:43.509 "subsystems": [ 00:16:43.509 { 00:16:43.509 "subsystem": "keyring", 00:16:43.509 "config": [ 00:16:43.509 { 00:16:43.509 "method": "keyring_file_add_key", 00:16:43.509 "params": { 00:16:43.509 "name": "key0", 00:16:43.509 "path": "/tmp/tmp.L0ReyH6Ojp" 00:16:43.509 } 00:16:43.509 } 00:16:43.509 ] 00:16:43.509 }, 00:16:43.509 { 00:16:43.509 "subsystem": "iobuf", 00:16:43.509 "config": [ 00:16:43.509 { 00:16:43.509 "method": "iobuf_set_options", 00:16:43.509 "params": { 00:16:43.509 "large_bufsize": 135168, 00:16:43.509 "large_pool_count": 1024, 00:16:43.509 "small_bufsize": 8192, 00:16:43.509 "small_pool_count": 8192 00:16:43.509 } 00:16:43.509 } 00:16:43.509 ] 00:16:43.509 }, 00:16:43.509 { 00:16:43.509 "subsystem": "sock", 00:16:43.509 "config": [ 00:16:43.509 { 00:16:43.509 "method": "sock_set_default_impl", 00:16:43.509 "params": { 00:16:43.509 
"impl_name": "posix" 00:16:43.509 } 00:16:43.509 }, 00:16:43.509 { 00:16:43.509 "method": "sock_impl_set_options", 00:16:43.509 "params": { 00:16:43.509 "enable_ktls": false, 00:16:43.509 "enable_placement_id": 0, 00:16:43.509 "enable_quickack": false, 00:16:43.509 "enable_recv_pipe": true, 00:16:43.509 "enable_zerocopy_send_client": false, 00:16:43.509 "enable_zerocopy_send_server": true, 00:16:43.509 "impl_name": "ssl", 00:16:43.509 "recv_buf_size": 4096, 00:16:43.509 "send_buf_size": 4096, 00:16:43.509 "tls_version": 0, 00:16:43.509 "zerocopy_threshold": 0 00:16:43.509 } 00:16:43.509 }, 00:16:43.509 { 00:16:43.509 "method": "sock_impl_set_options", 00:16:43.509 "params": { 00:16:43.509 "enable_ktls": false, 00:16:43.509 "enable_placement_id": 0, 00:16:43.509 "enable_quickack": false, 00:16:43.509 "enable_recv_pipe": true, 00:16:43.509 "enable_zerocopy_send_client": false, 00:16:43.509 "enable_zerocopy_send_server": true, 00:16:43.509 "impl_name": "posix", 00:16:43.509 "recv_buf_size": 2097152, 00:16:43.509 "send_buf_size": 2097152, 00:16:43.509 "tls_version": 0, 00:16:43.509 "zerocopy_threshold": 0 00:16:43.509 } 00:16:43.509 } 00:16:43.509 ] 00:16:43.509 }, 00:16:43.509 { 00:16:43.509 "subsystem": "vmd", 00:16:43.509 "config": [] 00:16:43.509 }, 00:16:43.509 { 00:16:43.509 "subsystem": "accel", 00:16:43.509 "config": [ 00:16:43.509 { 00:16:43.509 "method": "accel_set_options", 00:16:43.509 "params": { 00:16:43.509 "buf_count": 2048, 00:16:43.509 "large_cache_size": 16, 00:16:43.509 "sequence_count": 2048, 00:16:43.509 "small_cache_size": 128, 00:16:43.509 "task_count": 2048 00:16:43.509 } 00:16:43.509 } 00:16:43.509 ] 00:16:43.509 }, 00:16:43.509 { 00:16:43.509 "subsystem": "bdev", 00:16:43.509 "config": [ 00:16:43.509 { 00:16:43.509 "method": "bdev_set_options", 00:16:43.509 "params": { 00:16:43.509 "bdev_auto_examine": true, 00:16:43.509 "bdev_io_cache_size": 256, 00:16:43.509 "bdev_io_pool_size": 65535, 00:16:43.509 "iobuf_large_cache_size": 16, 00:16:43.509 "iobuf_small_cache_size": 128 00:16:43.509 } 00:16:43.509 }, 00:16:43.509 { 00:16:43.509 "method": "bdev_raid_set_options", 00:16:43.509 "params": { 00:16:43.509 "process_window_size_kb": 1024 00:16:43.509 } 00:16:43.509 }, 00:16:43.509 { 00:16:43.509 "method": "bdev_iscsi_set_options", 00:16:43.509 "params": { 00:16:43.509 "timeout_sec": 30 00:16:43.509 } 00:16:43.509 }, 00:16:43.509 { 00:16:43.509 "method": "bdev_nvme_set_options", 00:16:43.509 "params": { 00:16:43.509 "action_on_timeout": "none", 00:16:43.509 "allow_accel_sequence": false, 00:16:43.509 "arbitration_burst": 0, 00:16:43.509 "bdev_retry_count": 3, 00:16:43.509 "ctrlr_loss_timeout_sec": 0, 00:16:43.509 "delay_cmd_submit": true, 00:16:43.509 "dhchap_dhgroups": [ 00:16:43.509 "null", 00:16:43.509 "ffdhe2048", 00:16:43.509 "ffdhe3072", 00:16:43.509 "ffdhe4096", 00:16:43.509 "ffdhe6144", 00:16:43.509 "ffdhe8192" 00:16:43.509 ], 00:16:43.509 "dhchap_digests": [ 00:16:43.509 "sha256", 00:16:43.509 "sha384", 00:16:43.509 "sha512" 00:16:43.509 ], 00:16:43.509 "disable_auto_failback": false, 00:16:43.509 "fast_io_fail_timeout_sec": 0, 00:16:43.509 "generate_uuids": false, 00:16:43.509 "high_priority_weight": 0, 00:16:43.509 "io_path_stat": false, 00:16:43.509 "io_queue_requests": 0, 00:16:43.509 "keep_alive_timeout_ms": 10000, 00:16:43.509 "low_priority_weight": 0, 00:16:43.509 "medium_priority_weight": 0, 00:16:43.509 "nvme_adminq_poll_period_us": 10000, 00:16:43.509 "nvme_error_stat": false, 00:16:43.510 "nvme_ioq_poll_period_us": 0, 00:16:43.510 
"rdma_cm_event_timeout_ms": 0, 00:16:43.510 "rdma_max_cq_size": 0, 00:16:43.510 "rdma_srq_size": 0, 00:16:43.510 "reconnect_delay_sec": 0, 00:16:43.510 "timeout_admin_us": 0, 00:16:43.510 "timeout_us": 0, 00:16:43.510 "transport_ack_timeout": 0, 00:16:43.510 "transport_retry_count": 4, 00:16:43.510 "transport_tos": 0 00:16:43.510 } 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "method": "bdev_nvme_set_hotplug", 00:16:43.510 "params": { 00:16:43.510 "enable": false, 00:16:43.510 "period_us": 100000 00:16:43.510 } 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "method": "bdev_malloc_create", 00:16:43.510 "params": { 00:16:43.510 "block_size": 4096, 00:16:43.510 "name": "malloc0", 00:16:43.510 "num_blocks": 8192, 00:16:43.510 "optimal_io_boundary": 0, 00:16:43.510 "physical_block_size": 4096, 00:16:43.510 "uuid": "41111f64-39a2-4f46-a351-996467b3b064" 00:16:43.510 } 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "method": "bdev_wait_for_examine" 00:16:43.510 } 00:16:43.510 ] 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "subsystem": "nbd", 00:16:43.510 "config": [] 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "subsystem": "scheduler", 00:16:43.510 "config": [ 00:16:43.510 { 00:16:43.510 "method": "framework_set_scheduler", 00:16:43.510 "params": { 00:16:43.510 "name": "static" 00:16:43.510 } 00:16:43.510 } 00:16:43.510 ] 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "subsystem": "nvmf", 00:16:43.510 "config": [ 00:16:43.510 { 00:16:43.510 "method": "nvmf_set_config", 00:16:43.510 "params": { 00:16:43.510 "admin_cmd_passthru": { 00:16:43.510 "identify_ctrlr": false 00:16:43.510 }, 00:16:43.510 "discovery_filter": "match_any" 00:16:43.510 } 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "method": "nvmf_set_max_subsystems", 00:16:43.510 "params": { 00:16:43.510 "max_subsystems": 1024 00:16:43.510 } 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "method": "nvmf_set_crdt", 00:16:43.510 "params": { 00:16:43.510 "crdt1": 0, 00:16:43.510 "crdt2": 0, 00:16:43.510 "crdt3": 0 00:16:43.510 } 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "method": "nvmf_create_transport", 00:16:43.510 "params": { 00:16:43.510 "abort_timeout_sec": 1, 00:16:43.510 "ack_timeout": 0, 00:16:43.510 "buf_cache_size": 4294967295, 00:16:43.510 "c2h_success": false, 00:16:43.510 "data_wr_pool_size": 0, 00:16:43.510 "dif_insert_or_strip": false, 00:16:43.510 "in_capsule_data_size": 4096, 00:16:43.510 "io_unit_size": 131072, 00:16:43.510 "max_aq_depth": 128, 00:16:43.510 "max_io_qpairs_per_ctrlr": 127, 00:16:43.510 "max_io_size": 131072, 00:16:43.510 "max_queue_depth": 128, 00:16:43.510 "num_shared_buffers": 511, 00:16:43.510 "sock_priority": 0, 00:16:43.510 "trtype": "TCP", 00:16:43.510 "zcopy": false 00:16:43.510 } 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "method": "nvmf_create_subsystem", 00:16:43.510 "params": { 00:16:43.510 "allow_any_host": false, 00:16:43.510 "ana_reporting": false, 00:16:43.510 "max_cntlid": 65519, 00:16:43.510 "max_namespaces": 32, 00:16:43.510 "min_cntlid": 1, 00:16:43.510 "model_number": "SPDK bdev Controller", 00:16:43.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:43.510 "serial_number": "00000000000000000000" 00:16:43.510 } 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "method": "nvmf_subsystem_add_host", 00:16:43.510 "params": { 00:16:43.510 "host": "nqn.2016-06.io.spdk:host1", 00:16:43.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:43.510 "psk": "key0" 00:16:43.510 } 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "method": "nvmf_subsystem_add_ns", 00:16:43.510 "params": { 00:16:43.510 "namespace": { 00:16:43.510 
"bdev_name": "malloc0", 00:16:43.510 "nguid": "41111F6439A24F46A351996467B3B064", 00:16:43.510 "no_auto_visible": false, 00:16:43.510 "nsid": 1, 00:16:43.510 "uuid": "41111f64-39a2-4f46-a351-996467b3b064" 00:16:43.510 }, 00:16:43.510 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:43.510 } 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "method": "nvmf_subsystem_add_listener", 00:16:43.510 "params": { 00:16:43.510 "listen_address": { 00:16:43.510 "adrfam": "IPv4", 00:16:43.510 "traddr": "10.0.0.2", 00:16:43.510 "trsvcid": "4420", 00:16:43.510 "trtype": "TCP" 00:16:43.510 }, 00:16:43.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:43.510 "secure_channel": true 00:16:43.510 } 00:16:43.510 } 00:16:43.510 ] 00:16:43.510 } 00:16:43.510 ] 00:16:43.510 }' 00:16:43.510 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:43.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.510 15:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84852 00:16:43.510 15:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:43.510 15:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84852 00:16:43.510 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84852 ']' 00:16:43.510 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.510 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.510 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.510 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.510 15:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:43.510 [2024-07-15 15:39:38.598024] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:43.510 [2024-07-15 15:39:38.598646] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.769 [2024-07-15 15:39:38.736039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.769 [2024-07-15 15:39:38.787413] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.769 [2024-07-15 15:39:38.787476] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.769 [2024-07-15 15:39:38.787486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.769 [2024-07-15 15:39:38.787493] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.769 [2024-07-15 15:39:38.787500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:43.769 [2024-07-15 15:39:38.787583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.027 [2024-07-15 15:39:38.973455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.027 [2024-07-15 15:39:39.005397] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:44.027 [2024-07-15 15:39:39.005618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=84896 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 84896 /var/tmp/bdevperf.sock 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84896 ']' 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
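Here both applications come back purely from the captured JSON, with no runtime RPC setup; the fresh "TLS support is considered experimental" listener notice above is emitted while the target parses its config file. A sketch of the plumbing, under the assumption that the /dev/fd/62 and /dev/fd/63 paths seen in the trace are bash process substitutions of the saved blobs (the descriptor numbers are whatever the shell happened to assign):

    # Target: reboot inside the test netns, reading the saved config instead of taking setup RPCs.
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &

    # Initiator: same idea for bdevperf, whose config carries the keyring entry and the --psk attach.
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &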
00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:44.592 15:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:16:44.592 "subsystems": [ 00:16:44.592 { 00:16:44.592 "subsystem": "keyring", 00:16:44.592 "config": [ 00:16:44.592 { 00:16:44.592 "method": "keyring_file_add_key", 00:16:44.592 "params": { 00:16:44.592 "name": "key0", 00:16:44.592 "path": "/tmp/tmp.L0ReyH6Ojp" 00:16:44.592 } 00:16:44.592 } 00:16:44.592 ] 00:16:44.592 }, 00:16:44.592 { 00:16:44.592 "subsystem": "iobuf", 00:16:44.592 "config": [ 00:16:44.592 { 00:16:44.592 "method": "iobuf_set_options", 00:16:44.592 "params": { 00:16:44.592 "large_bufsize": 135168, 00:16:44.592 "large_pool_count": 1024, 00:16:44.592 "small_bufsize": 8192, 00:16:44.592 "small_pool_count": 8192 00:16:44.592 } 00:16:44.592 } 00:16:44.592 ] 00:16:44.592 }, 00:16:44.592 { 00:16:44.592 "subsystem": "sock", 00:16:44.592 "config": [ 00:16:44.592 { 00:16:44.592 "method": "sock_set_default_impl", 00:16:44.592 "params": { 00:16:44.592 "impl_name": "posix" 00:16:44.592 } 00:16:44.592 }, 00:16:44.592 { 00:16:44.592 "method": "sock_impl_set_options", 00:16:44.592 "params": { 00:16:44.592 "enable_ktls": false, 00:16:44.592 "enable_placement_id": 0, 00:16:44.592 "enable_quickack": false, 00:16:44.592 "enable_recv_pipe": true, 00:16:44.592 "enable_zerocopy_send_client": false, 00:16:44.592 "enable_zerocopy_send_server": true, 00:16:44.592 "impl_name": "ssl", 00:16:44.592 "recv_buf_size": 4096, 00:16:44.592 "send_buf_size": 4096, 00:16:44.592 "tls_version": 0, 00:16:44.592 "zerocopy_threshold": 0 00:16:44.592 } 00:16:44.592 }, 00:16:44.592 { 00:16:44.592 "method": "sock_impl_set_options", 00:16:44.592 "params": { 00:16:44.592 "enable_ktls": false, 00:16:44.592 "enable_placement_id": 0, 00:16:44.592 "enable_quickack": false, 00:16:44.592 "enable_recv_pipe": true, 00:16:44.592 "enable_zerocopy_send_client": false, 00:16:44.592 "enable_zerocopy_send_server": true, 00:16:44.592 "impl_name": "posix", 00:16:44.592 "recv_buf_size": 2097152, 00:16:44.592 "send_buf_size": 2097152, 00:16:44.592 "tls_version": 0, 00:16:44.592 "zerocopy_threshold": 0 00:16:44.592 } 00:16:44.592 } 00:16:44.592 ] 00:16:44.592 }, 00:16:44.592 { 00:16:44.592 "subsystem": "vmd", 00:16:44.592 "config": [] 00:16:44.592 }, 00:16:44.592 { 00:16:44.592 "subsystem": "accel", 00:16:44.592 "config": [ 00:16:44.592 { 00:16:44.592 "method": "accel_set_options", 00:16:44.592 "params": { 00:16:44.592 "buf_count": 2048, 00:16:44.592 "large_cache_size": 16, 00:16:44.592 "sequence_count": 2048, 00:16:44.592 "small_cache_size": 128, 00:16:44.592 "task_count": 2048 00:16:44.592 } 00:16:44.592 } 00:16:44.592 ] 00:16:44.592 }, 00:16:44.592 { 00:16:44.592 "subsystem": "bdev", 00:16:44.592 "config": [ 00:16:44.592 { 00:16:44.592 "method": "bdev_set_options", 00:16:44.592 "params": { 00:16:44.592 "bdev_auto_examine": true, 00:16:44.592 "bdev_io_cache_size": 256, 00:16:44.592 "bdev_io_pool_size": 65535, 00:16:44.592 "iobuf_large_cache_size": 16, 00:16:44.592 "iobuf_small_cache_size": 128 00:16:44.592 } 00:16:44.592 }, 00:16:44.592 { 00:16:44.592 "method": "bdev_raid_set_options", 00:16:44.592 "params": { 00:16:44.592 "process_window_size_kb": 1024 00:16:44.593 } 00:16:44.593 }, 00:16:44.593 
{ 00:16:44.593 "method": "bdev_iscsi_set_options", 00:16:44.593 "params": { 00:16:44.593 "timeout_sec": 30 00:16:44.593 } 00:16:44.593 }, 00:16:44.593 { 00:16:44.593 "method": "bdev_nvme_set_options", 00:16:44.593 "params": { 00:16:44.593 "action_on_timeout": "none", 00:16:44.593 "allow_accel_sequence": false, 00:16:44.593 "arbitration_burst": 0, 00:16:44.593 "bdev_retry_count": 3, 00:16:44.593 "ctrlr_loss_timeout_sec": 0, 00:16:44.593 "delay_cmd_submit": true, 00:16:44.593 "dhchap_dhgroups": [ 00:16:44.593 "null", 00:16:44.593 "ffdhe2048", 00:16:44.593 "ffdhe3072", 00:16:44.593 "ffdhe4096", 00:16:44.593 "ffdhe6144", 00:16:44.593 "ffdhe8192" 00:16:44.593 ], 00:16:44.593 "dhchap_digests": [ 00:16:44.593 "sha256", 00:16:44.593 "sha384", 00:16:44.593 "sha512" 00:16:44.593 ], 00:16:44.593 "disable_auto_failback": false, 00:16:44.593 "fast_io_fail_timeout_sec": 0, 00:16:44.593 "generate_uuids": false, 00:16:44.593 "high_priority_weight": 0, 00:16:44.593 "io_path_stat": false, 00:16:44.593 "io_queue_requests": 512, 00:16:44.593 "keep_alive_timeout_ms": 10000, 00:16:44.593 "low_priority_weight": 0, 00:16:44.593 "medium_priority_weight": 0, 00:16:44.593 "nvme_adminq_poll_period_us": 10000, 00:16:44.593 "nvme_error_stat": false, 00:16:44.593 "nvme_ioq_poll_period_us": 0, 00:16:44.593 "rdma_cm_event_timeout_ms": 0, 00:16:44.593 "rdma_max_cq_size": 0, 00:16:44.593 "rdma_srq_size": 0, 00:16:44.593 "reconnect_delay_sec": 0, 00:16:44.593 "timeout_admin_us": 0, 00:16:44.593 "timeout_us": 0, 00:16:44.593 "transport_ack_timeout": 0, 00:16:44.593 "transport_retry_count": 4, 00:16:44.593 "transport_tos": 0 00:16:44.593 } 00:16:44.593 }, 00:16:44.593 { 00:16:44.593 "method": "bdev_nvme_attach_controller", 00:16:44.593 "params": { 00:16:44.593 "adrfam": "IPv4", 00:16:44.593 "ctrlr_loss_timeout_sec": 0, 00:16:44.593 "ddgst": false, 00:16:44.593 "fast_io_fail_timeout_sec": 0, 00:16:44.593 "hdgst": false, 00:16:44.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:44.593 "name": "nvme0", 00:16:44.593 "prchk_guard": false, 00:16:44.593 "prchk_reftag": false, 00:16:44.593 "psk": "key0", 00:16:44.593 "reconnect_delay_sec": 0, 00:16:44.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.593 "traddr": "10.0.0.2", 00:16:44.593 "trsvcid": "4420", 00:16:44.593 "trtype": "TCP" 00:16:44.593 } 00:16:44.593 }, 00:16:44.593 { 00:16:44.593 "method": "bdev_nvme_set_hotplug", 00:16:44.593 "params": { 00:16:44.593 "enable": false, 00:16:44.593 "period_us": 100000 00:16:44.593 } 00:16:44.593 }, 00:16:44.593 { 00:16:44.593 "method": "bdev_enable_histogram", 00:16:44.593 "params": { 00:16:44.593 "enable": true, 00:16:44.593 "name": "nvme0n1" 00:16:44.593 } 00:16:44.593 }, 00:16:44.593 { 00:16:44.593 "method": "bdev_wait_for_examine" 00:16:44.593 } 00:16:44.593 ] 00:16:44.593 }, 00:16:44.593 { 00:16:44.593 "subsystem": "nbd", 00:16:44.593 "config": [] 00:16:44.593 } 00:16:44.593 ] 00:16:44.593 }' 00:16:44.593 [2024-07-15 15:39:39.675612] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:16:44.593 [2024-07-15 15:39:39.675709] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84896 ] 00:16:44.851 [2024-07-15 15:39:39.814724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.851 [2024-07-15 15:39:39.883836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.110 [2024-07-15 15:39:40.021898] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:45.676 15:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.676 15:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:45.676 15:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:45.676 15:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:16:45.934 15:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.934 15:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:45.934 Running I/O for 1 seconds... 00:16:47.308 00:16:47.308 Latency(us) 00:16:47.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.308 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:47.308 Verification LBA range: start 0x0 length 0x2000 00:16:47.308 nvme0n1 : 1.03 4351.77 17.00 0.00 0.00 29102.37 7179.17 18707.55 00:16:47.308 =================================================================================================================== 00:16:47.308 Total : 4351.77 17.00 0.00 0.00 29102.37 7179.17 18707.55 00:16:47.308 0 00:16:47.308 15:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:16:47.308 15:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:16:47.308 15:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:47.308 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:16:47.308 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:16:47.308 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:47.308 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:47.308 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:47.309 nvmf_trace.0 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 84896 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84896 ']' 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84896 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:47.309 
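Because nothing is configured over RPC in this pass, the script explicitly verifies that the controller was restored from the JSON alone before generating I/O (target/tls.sh@275-276). A condensed sketch of that check, using only commands that appear in the trace (the string comparison is simplified from the script's glob form):

    # The controller list must already contain nvme0, recreated from bperfcfg at startup.
    name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]]

    # Replay the same 1-second verify workload over the TLS connection.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The cleanup right after archives /dev/shm/nvmf_trace.0 as nvmf_trace.0_shm.tar.gz in the output directory, matching the 'spdk_trace -s nvmf -i 0' hint printed at every target start, so the nvmf tracepoints from this run can be inspected offline.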
15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84896 00:16:47.309 killing process with pid 84896 00:16:47.309 Received shutdown signal, test time was about 1.000000 seconds 00:16:47.309 00:16:47.309 Latency(us) 00:16:47.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.309 =================================================================================================================== 00:16:47.309 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84896' 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84896 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84896 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.309 rmmod nvme_tcp 00:16:47.309 rmmod nvme_fabrics 00:16:47.309 rmmod nvme_keyring 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 84852 ']' 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 84852 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84852 ']' 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84852 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84852 00:16:47.309 killing process with pid 84852 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84852' 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84852 00:16:47.309 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84852 00:16:47.567 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:47.567 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:47.567 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:47.567 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.567 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:47.567 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.567 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.567 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.567 15:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:47.567 15:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.nOaS5CbqvU /tmp/tmp.jjcMGt5rvw /tmp/tmp.L0ReyH6Ojp 00:16:47.567 00:16:47.567 real 1m19.141s 00:16:47.567 user 2m2.855s 00:16:47.567 sys 0m27.010s 00:16:47.567 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:47.567 15:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.567 ************************************ 00:16:47.567 END TEST nvmf_tls 00:16:47.567 ************************************ 00:16:47.567 15:39:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:47.567 15:39:42 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:47.567 15:39:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:47.567 15:39:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.567 15:39:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:47.567 ************************************ 00:16:47.567 START TEST nvmf_fips 00:16:47.567 ************************************ 00:16:47.567 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:47.827 * Looking for test storage... 
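(Before the FIPS run gets going: the nvmf_tls teardown traced just above condenses to the sequence below. The PIDs, the nvmf_init_if interface name and the three temporary PSK file names are the ones from this run; killprocess and nvmftestfini wrap these calls with the liveness and sudo checks visible in the trace.)
  kill 84896 && wait 84896        # stop the bdevperf job (reactor_1 in this run)
  sync
  modprobe -v -r nvme-tcp         # the rmmod output above shows nvme_tcp, nvme_fabrics and nvme_keyring being unloaded
  modprobe -v -r nvme-fabrics
  kill 84852 && wait 84852        # stop the nvmf_tgt started for the TLS tests
  # _remove_spdk_ns runs with xtrace disabled above; the target namespace is cleaned up there
  ip -4 addr flush nvmf_init_if   # clear the initiator-side veth address
  rm -f /tmp/tmp.nOaS5CbqvU /tmp/tmp.jjcMGt5rvw /tmp/tmp.L0ReyH6Ojp   # temporary key files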
00:16:47.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.827 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:16:47.828 Error setting digest 00:16:47.828 00A2EBF4A27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:16:47.828 00A2EBF4A27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.828 15:39:42 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:47.829 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:48.102 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:48.102 Cannot find device "nvmf_tgt_br" 00:16:48.102 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:16:48.102 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.102 Cannot find device "nvmf_tgt_br2" 00:16:48.102 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:16:48.102 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:48.102 15:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:48.102 Cannot find device "nvmf_tgt_br" 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:48.102 Cannot find device "nvmf_tgt_br2" 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:48.102 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:48.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:16:48.360 00:16:48.360 --- 10.0.0.2 ping statistics --- 00:16:48.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.360 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:48.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:48.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:48.360 00:16:48.360 --- 10.0.0.3 ping statistics --- 00:16:48.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.360 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:48.360 00:16:48.360 --- 10.0.0.1 ping statistics --- 00:16:48.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.360 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85169 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85169 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85169 ']' 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.360 15:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:48.360 [2024-07-15 15:39:43.397185] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:16:48.360 [2024-07-15 15:39:43.397280] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.618 [2024-07-15 15:39:43.538309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.618 [2024-07-15 15:39:43.607326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.619 [2024-07-15 15:39:43.607383] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.619 [2024-07-15 15:39:43.607398] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.619 [2024-07-15 15:39:43.607408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.619 [2024-07-15 15:39:43.607417] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.619 [2024-07-15 15:39:43.607446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:49.553 15:39:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.553 [2024-07-15 15:39:44.660020] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.553 [2024-07-15 15:39:44.675937] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:49.553 [2024-07-15 15:39:44.676115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.812 [2024-07-15 15:39:44.702337] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:49.812 malloc0 00:16:49.812 15:39:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:49.812 15:39:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85234 00:16:49.812 15:39:44 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:49.812 15:39:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 85234 /var/tmp/bdevperf.sock 00:16:49.812 15:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85234 ']' 00:16:49.812 15:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:49.812 15:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:49.812 15:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:49.812 15:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.812 15:39:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:49.812 [2024-07-15 15:39:44.812073] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:49.812 [2024-07-15 15:39:44.812151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85234 ] 00:16:50.070 [2024-07-15 15:39:44.953449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.070 [2024-07-15 15:39:45.022769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.637 15:39:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.637 15:39:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:50.896 15:39:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:50.896 [2024-07-15 15:39:45.966226] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:50.896 [2024-07-15 15:39:45.966337] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:51.154 TLSTESTn1 00:16:51.155 15:39:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:51.155 Running I/O for 10 seconds... 
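(The 10-second verification run whose results follow is driven entirely over the bdevperf RPC socket. The commands below are the ones traced above, reflowed for readability; passing the PSK as a file path via --psk is the deprecated pre-keyring form, which is exactly what this test exercises and what triggers the deprecation warnings logged here.)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests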
00:17:01.125 00:17:01.125 Latency(us) 00:17:01.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.125 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:01.125 Verification LBA range: start 0x0 length 0x2000 00:17:01.125 TLSTESTn1 : 10.02 4458.90 17.42 0.00 0.00 28652.11 6464.23 20256.58 00:17:01.125 =================================================================================================================== 00:17:01.125 Total : 4458.90 17.42 0.00 0.00 28652.11 6464.23 20256.58 00:17:01.125 0 00:17:01.125 15:39:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:01.125 15:39:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:01.125 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:17:01.125 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:17:01.125 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:01.125 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:01.125 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:01.125 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:01.125 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:01.125 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:01.125 nvmf_trace.0 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85234 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85234 ']' 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85234 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85234 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:01.384 killing process with pid 85234 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85234' 00:17:01.384 Received shutdown signal, test time was about 10.000000 seconds 00:17:01.384 00:17:01.384 Latency(us) 00:17:01.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.384 =================================================================================================================== 00:17:01.384 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85234 00:17:01.384 [2024-07-15 15:39:56.300404] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85234 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
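(The process_shm --id 0 step traced above saves the target's shared-memory trace buffer before anything is killed. Stripped of its argument handling it comes down to the loop below, with the output path taken from this run.)
  shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')   # matches nvmf_trace.0 here
  for n in $shm_files; do
      tar -C /dev/shm/ -cvzf "/home/vagrant/spdk_repo/spdk/../output/${n}_shm.tar.gz" "$n"
  done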
00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:01.384 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:01.384 rmmod nvme_tcp 00:17:01.384 rmmod nvme_fabrics 00:17:01.643 rmmod nvme_keyring 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85169 ']' 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85169 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85169 ']' 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85169 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85169 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85169' 00:17:01.643 killing process with pid 85169 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85169 00:17:01.643 [2024-07-15 15:39:56.583886] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85169 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:01.643 00:17:01.643 real 0m14.113s 00:17:01.643 user 0m18.981s 00:17:01.643 sys 0m5.716s 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:01.643 15:39:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:01.643 ************************************ 00:17:01.643 END TEST nvmf_fips 00:17:01.643 ************************************ 00:17:01.902 15:39:56 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:01.902 15:39:56 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:17:01.902 15:39:56 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:17:01.902 15:39:56 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:17:01.902 15:39:56 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:01.902 15:39:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:01.902 15:39:56 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:17:01.902 15:39:56 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:01.902 15:39:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:01.902 15:39:56 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:17:01.902 15:39:56 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:01.902 15:39:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:01.902 15:39:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.902 15:39:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:01.902 ************************************ 00:17:01.902 START TEST nvmf_multicontroller 00:17:01.902 ************************************ 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:01.902 * Looking for test storage... 00:17:01.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
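(The build_nvmf_app_args steps being traced here assemble the target command line that appears further down once the veth topology exists. Roughly, using the values from this run; the exact initialisation of NVMF_APP lives in nvmf/common.sh:)
  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                  # shm id 0, enable all tracepoint groups
  NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk "${NVMF_APP[@]}")   # prefixed once nvmf_veth_init has created the namespace
  "${NVMF_APP[@]}" -m 0xE &                                    # nvmfappstart -m 0xE for the multicontroller test
  nvmfpid=$!
  waitforlisten "$nvmfpid"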
00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:01.902 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:01.903 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.903 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.903 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.903 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.903 15:39:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.903 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.903 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.903 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.903 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:01.903 15:39:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:01.903 Cannot find device "nvmf_tgt_br" 00:17:01.903 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:17:01.903 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.903 Cannot find device "nvmf_tgt_br2" 00:17:01.903 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:17:01.903 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:01.903 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:01.903 Cannot find device "nvmf_tgt_br" 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:02.161 Cannot find device "nvmf_tgt_br2" 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:02.161 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:02.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:17:02.161 00:17:02.161 --- 10.0.0.2 ping statistics --- 00:17:02.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.162 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:02.162 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:02.162 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:02.162 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:17:02.162 00:17:02.162 --- 10.0.0.3 ping statistics --- 00:17:02.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.162 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:02.162 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:02.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:02.420 00:17:02.420 --- 10.0.0.1 ping statistics --- 00:17:02.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.420 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:02.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85597 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85597 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85597 ']' 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.420 15:39:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:02.420 [2024-07-15 15:39:57.368659] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:02.421 [2024-07-15 15:39:57.368747] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.421 [2024-07-15 15:39:57.501311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:02.680 [2024-07-15 15:39:57.557181] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
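The nvmf_veth_init sequence traced above builds the test network: it clears any leftover interfaces, creates the nvmf_tgt_ns_spdk namespace, wires veth pairs for the initiator (nvmf_init_if, 10.0.0.1/24) and two target ports (nvmf_tgt_if at 10.0.0.2/24, nvmf_tgt_if2 at 10.0.0.3/24) into the nvmf_br bridge, opens TCP port 4420 in iptables, and ping-checks all three addresses before nvmf_tgt is started inside the namespace. A condensed sketch of the same topology is shown below; it assumes root, iproute2 and iptables, and reuses the harness's interface and namespace names purely for illustration, not as an exact replay of nvmf/common.sh.

# Sketch: rebuild the harness's veth/bridge topology by hand (assumes root, iproute2, iptables)
ip netns add nvmf_tgt_ns_spdk                                  # namespace that will host nvmf_tgt
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + bridge end
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target port 1 + bridge end
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target port 2 + bridge end
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge ties the host-side ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # initiator -> both target ports
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

nvmf_tgt itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -m 0xE, as traced below), while the initiator-side tools such as bdevperf stay in the root namespace and reach it over 10.0.0.2.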
00:17:02.680 [2024-07-15 15:39:57.557457] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.680 [2024-07-15 15:39:57.557672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.680 [2024-07-15 15:39:57.557802] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.680 [2024-07-15 15:39:57.557837] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.680 [2024-07-15 15:39:57.558037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.680 [2024-07-15 15:39:57.558575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.680 [2024-07-15 15:39:57.558586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.249 [2024-07-15 15:39:58.356294] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.249 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.508 Malloc0 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.508 [2024-07-15 15:39:58.407362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.508 [2024-07-15 15:39:58.415283] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.508 Malloc1 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=85649 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85649 /var/tmp/bdevperf.sock 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85649 ']' 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:03.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.508 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.766 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.766 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:17:03.766 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:03.766 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.766 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:03.766 NVMe0n1 00:17:03.766 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.766 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:03.766 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:03.766 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.766 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.025 1 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local 
arg=rpc_cmd 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:04.025 2024/07/15 15:39:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:04.025 request: 00:17:04.025 { 00:17:04.025 "method": "bdev_nvme_attach_controller", 00:17:04.025 "params": { 00:17:04.025 "name": "NVMe0", 00:17:04.025 "trtype": "tcp", 00:17:04.025 "traddr": "10.0.0.2", 00:17:04.025 "adrfam": "ipv4", 00:17:04.025 "trsvcid": "4420", 00:17:04.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.025 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:04.025 "hostaddr": "10.0.0.2", 00:17:04.025 "hostsvcid": "60000", 00:17:04.025 "prchk_reftag": false, 00:17:04.025 "prchk_guard": false, 00:17:04.025 "hdgst": false, 00:17:04.025 "ddgst": false 00:17:04.025 } 00:17:04.025 } 00:17:04.025 Got JSON-RPC error response 00:17:04.025 GoRPCClient: error on JSON-RPC call 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:04.025 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:04.026 15:39:58 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:04.026 2024/07/15 15:39:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:04.026 request: 00:17:04.026 { 00:17:04.026 "method": "bdev_nvme_attach_controller", 00:17:04.026 "params": { 00:17:04.026 "name": "NVMe0", 00:17:04.026 "trtype": "tcp", 00:17:04.026 "traddr": "10.0.0.2", 00:17:04.026 "adrfam": "ipv4", 00:17:04.026 "trsvcid": "4420", 00:17:04.026 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:04.026 "hostaddr": "10.0.0.2", 00:17:04.026 "hostsvcid": "60000", 00:17:04.026 "prchk_reftag": false, 00:17:04.026 "prchk_guard": false, 00:17:04.026 "hdgst": false, 00:17:04.026 "ddgst": false 00:17:04.026 } 00:17:04.026 } 00:17:04.026 Got JSON-RPC error response 00:17:04.026 GoRPCClient: error on JSON-RPC call 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:04.026 15:39:58 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:04.026 2024/07/15 15:39:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:17:04.026 request: 00:17:04.026 { 00:17:04.026 "method": "bdev_nvme_attach_controller", 00:17:04.026 "params": { 00:17:04.026 "name": "NVMe0", 00:17:04.026 "trtype": "tcp", 00:17:04.026 "traddr": "10.0.0.2", 00:17:04.026 "adrfam": "ipv4", 00:17:04.026 "trsvcid": "4420", 00:17:04.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.026 "hostaddr": "10.0.0.2", 00:17:04.026 "hostsvcid": "60000", 00:17:04.026 "prchk_reftag": false, 00:17:04.026 "prchk_guard": false, 00:17:04.026 "hdgst": false, 00:17:04.026 "ddgst": false, 00:17:04.026 "multipath": "disable" 00:17:04.026 } 00:17:04.026 } 00:17:04.026 Got JSON-RPC error response 00:17:04.026 GoRPCClient: error on JSON-RPC call 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:04.026 2024/07/15 15:39:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 
ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:04.026 request: 00:17:04.026 { 00:17:04.026 "method": "bdev_nvme_attach_controller", 00:17:04.026 "params": { 00:17:04.026 "name": "NVMe0", 00:17:04.026 "trtype": "tcp", 00:17:04.026 "traddr": "10.0.0.2", 00:17:04.026 "adrfam": "ipv4", 00:17:04.026 "trsvcid": "4420", 00:17:04.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.026 "hostaddr": "10.0.0.2", 00:17:04.026 "hostsvcid": "60000", 00:17:04.026 "prchk_reftag": false, 00:17:04.026 "prchk_guard": false, 00:17:04.026 "hdgst": false, 00:17:04.026 "ddgst": false, 00:17:04.026 "multipath": "failover" 00:17:04.026 } 00:17:04.026 } 00:17:04.026 Got JSON-RPC error response 00:17:04.026 GoRPCClient: error on JSON-RPC call 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.026 15:39:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:04.026 00:17:04.026 15:39:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.026 15:39:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:04.026 15:39:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.026 15:39:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:04.026 15:39:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.026 15:39:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:04.026 15:39:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.026 15:39:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:04.026 00:17:04.027 15:39:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.027 15:39:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:04.027 15:39:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:04.027 15:39:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:04.027 15:39:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:04.027 15:39:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.027 15:39:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:04.027 15:39:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:05.423 0 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 85649 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85649 ']' 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85649 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85649 00:17:05.423 killing process with pid 85649 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85649' 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85649 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85649 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:17:05.423 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:05.423 [2024-07-15 15:39:58.524592] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:05.423 [2024-07-15 15:39:58.524690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85649 ] 00:17:05.423 [2024-07-15 15:39:58.663216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.423 [2024-07-15 15:39:58.731103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.423 [2024-07-15 15:39:59.125000] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 15b919ea-187a-40f7-8318-f16b0a21aaaa already exists 00:17:05.423 [2024-07-15 15:39:59.125051] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:15b919ea-187a-40f7-8318-f16b0a21aaaa alias for bdev NVMe1n1 00:17:05.423 [2024-07-15 15:39:59.125100] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:05.423 Running I/O for 1 seconds... 00:17:05.423 00:17:05.423 Latency(us) 00:17:05.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.423 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:05.423 NVMe0n1 : 1.01 20950.53 81.84 0.00 0.00 6100.88 3127.85 11558.17 00:17:05.423 =================================================================================================================== 00:17:05.423 Total : 20950.53 81.84 0.00 0.00 6100.88 3127.85 11558.17 00:17:05.423 Received shutdown signal, test time was about 1.000000 seconds 00:17:05.423 00:17:05.423 Latency(us) 00:17:05.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.423 =================================================================================================================== 00:17:05.423 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.423 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:05.423 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.682 rmmod nvme_tcp 00:17:05.682 rmmod nvme_fabrics 00:17:05.682 rmmod nvme_keyring 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.682 15:40:00 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85597 ']' 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85597 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85597 ']' 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85597 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85597 00:17:05.682 killing process with pid 85597 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85597' 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85597 00:17:05.682 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85597 00:17:05.941 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:05.941 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:05.941 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:05.941 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.941 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.941 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.941 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.941 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.941 15:40:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:05.941 ************************************ 00:17:05.941 END TEST nvmf_multicontroller 00:17:05.941 ************************************ 00:17:05.941 00:17:05.941 real 0m4.000s 00:17:05.941 user 0m12.045s 00:17:05.941 sys 0m0.923s 00:17:05.941 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:05.941 15:40:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:05.941 15:40:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:05.941 15:40:00 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:05.941 15:40:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:05.941 15:40:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.941 15:40:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:05.941 ************************************ 00:17:05.941 START TEST nvmf_aer 00:17:05.941 ************************************ 00:17:05.941 15:40:00 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:05.941 * Looking for test storage... 00:17:05.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:05.941 15:40:00 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:05.941 15:40:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:05.941 15:40:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.941 15:40:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.941 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.941 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.941 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.941 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.941 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.941 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.941 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.941 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:05.942 Cannot find device "nvmf_tgt_br" 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.942 Cannot find device "nvmf_tgt_br2" 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:05.942 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:06.201 Cannot find device "nvmf_tgt_br" 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:06.201 Cannot find device "nvmf_tgt_br2" 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:06.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:06.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:06.201 
15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:06.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:17:06.201 00:17:06.201 --- 10.0.0.2 ping statistics --- 00:17:06.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.201 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:06.201 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:06.201 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:17:06.201 00:17:06.201 --- 10.0.0.3 ping statistics --- 00:17:06.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.201 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:06.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:06.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:06.201 00:17:06.201 --- 10.0.0.1 ping statistics --- 00:17:06.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.201 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:06.201 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=85879 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 85879 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 85879 ']' 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.459 15:40:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:06.459 [2024-07-15 15:40:01.414778] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:06.459 [2024-07-15 15:40:01.415680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.459 [2024-07-15 15:40:01.559539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.717 [2024-07-15 15:40:01.612001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.717 [2024-07-15 15:40:01.612054] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
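Once nvmf_tgt is up and listening on /var/tmp/spdk.sock, both the multicontroller test above and this aer test configure it through rpc_cmd, which in this repo is effectively a wrapper around scripts/rpc.py. The sketch below condenses the rpc_cmd sequence traced for cnode1 in the multicontroller run (the aer test repeats the same pattern with -m 2 and a single 4420 listener); the commands, addresses and the 64 MiB / 512-byte malloc geometry are taken from the trace, while the $RPC shorthand is only for brevity here.

# Sketch: target-side configuration as driven by rpc_cmd (paths from this harness)
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport with the harness's options
$RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as namespace 1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # second path for the multipath checks

The multicontroller test then points rpc.py at the bdevperf socket instead (-s /var/tmp/bdevperf.sock) to attach NVMe0 via 10.0.0.2:4420 and exercise the duplicate-name, multipath "disable" and "failover" error paths visible in the JSON-RPC responses above, before adding the 4421 path and running I/O.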
00:17:06.717 [2024-07-15 15:40:01.612064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.717 [2024-07-15 15:40:01.612071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.717 [2024-07-15 15:40:01.612077] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.717 [2024-07-15 15:40:01.612201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.717 [2024-07-15 15:40:01.612515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.717 [2024-07-15 15:40:01.612846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.717 [2024-07-15 15:40:01.612909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.284 [2024-07-15 15:40:02.382243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.284 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.543 Malloc0 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.543 [2024-07-15 15:40:02.443465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.543 [ 00:17:07.543 { 00:17:07.543 "allow_any_host": true, 00:17:07.543 "hosts": [], 00:17:07.543 "listen_addresses": [], 00:17:07.543 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:07.543 "subtype": "Discovery" 00:17:07.543 }, 00:17:07.543 { 00:17:07.543 "allow_any_host": true, 00:17:07.543 "hosts": [], 00:17:07.543 "listen_addresses": [ 00:17:07.543 { 00:17:07.543 "adrfam": "IPv4", 00:17:07.543 "traddr": "10.0.0.2", 00:17:07.543 "trsvcid": "4420", 00:17:07.543 "trtype": "TCP" 00:17:07.543 } 00:17:07.543 ], 00:17:07.543 "max_cntlid": 65519, 00:17:07.543 "max_namespaces": 2, 00:17:07.543 "min_cntlid": 1, 00:17:07.543 "model_number": "SPDK bdev Controller", 00:17:07.543 "namespaces": [ 00:17:07.543 { 00:17:07.543 "bdev_name": "Malloc0", 00:17:07.543 "name": "Malloc0", 00:17:07.543 "nguid": "16F910FB4C4B4C9494C76F06AC3975D7", 00:17:07.543 "nsid": 1, 00:17:07.543 "uuid": "16f910fb-4c4b-4c94-94c7-6f06ac3975d7" 00:17:07.543 } 00:17:07.543 ], 00:17:07.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.543 "serial_number": "SPDK00000000000001", 00:17:07.543 "subtype": "NVMe" 00:17:07.543 } 00:17:07.543 ] 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=85933 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:17:07.543 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.801 Malloc1 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.801 Asynchronous Event Request test 00:17:07.801 Attaching to 10.0.0.2 00:17:07.801 Attached to 10.0.0.2 00:17:07.801 Registering asynchronous event callbacks... 00:17:07.801 Starting namespace attribute notice tests for all controllers... 00:17:07.801 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:07.801 aer_cb - Changed Namespace 00:17:07.801 Cleaning up... 00:17:07.801 [ 00:17:07.801 { 00:17:07.801 "allow_any_host": true, 00:17:07.801 "hosts": [], 00:17:07.801 "listen_addresses": [], 00:17:07.801 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:07.801 "subtype": "Discovery" 00:17:07.801 }, 00:17:07.801 { 00:17:07.801 "allow_any_host": true, 00:17:07.801 "hosts": [], 00:17:07.801 "listen_addresses": [ 00:17:07.801 { 00:17:07.801 "adrfam": "IPv4", 00:17:07.801 "traddr": "10.0.0.2", 00:17:07.801 "trsvcid": "4420", 00:17:07.801 "trtype": "TCP" 00:17:07.801 } 00:17:07.801 ], 00:17:07.801 "max_cntlid": 65519, 00:17:07.801 "max_namespaces": 2, 00:17:07.801 "min_cntlid": 1, 00:17:07.801 "model_number": "SPDK bdev Controller", 00:17:07.801 "namespaces": [ 00:17:07.801 { 00:17:07.801 "bdev_name": "Malloc0", 00:17:07.801 "name": "Malloc0", 00:17:07.801 "nguid": "16F910FB4C4B4C9494C76F06AC3975D7", 00:17:07.801 "nsid": 1, 00:17:07.801 "uuid": "16f910fb-4c4b-4c94-94c7-6f06ac3975d7" 00:17:07.801 }, 00:17:07.801 { 00:17:07.801 "bdev_name": "Malloc1", 00:17:07.801 "name": "Malloc1", 00:17:07.801 "nguid": "D58CC9ACFBA548629E23955FBE8B0C86", 00:17:07.801 "nsid": 2, 00:17:07.801 "uuid": "d58cc9ac-fba5-4862-9e23-955fbe8b0c86" 00:17:07.801 } 00:17:07.801 ], 00:17:07.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.801 "serial_number": "SPDK00000000000001", 00:17:07.801 "subtype": "NVMe" 00:17:07.801 } 00:17:07.801 ] 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 85933 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:07.801 rmmod nvme_tcp 00:17:07.801 rmmod nvme_fabrics 00:17:07.801 rmmod nvme_keyring 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 85879 ']' 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 85879 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 85879 ']' 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 85879 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85879 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:07.801 killing process with pid 85879 00:17:07.801 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85879' 00:17:07.802 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 85879 00:17:07.802 15:40:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 85879 00:17:08.060 15:40:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:08.060 15:40:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:08.060 15:40:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:08.060 15:40:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.060 15:40:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.060 15:40:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.060 15:40:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:17:08.060 15:40:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.060 15:40:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:08.060 ************************************ 00:17:08.060 END TEST nvmf_aer 00:17:08.060 ************************************ 00:17:08.060 00:17:08.060 real 0m2.215s 00:17:08.060 user 0m6.090s 00:17:08.060 sys 0m0.577s 00:17:08.060 15:40:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:08.060 15:40:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:08.060 15:40:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:08.060 15:40:03 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:08.060 15:40:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:08.060 15:40:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.060 15:40:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:08.060 ************************************ 00:17:08.060 START TEST nvmf_async_init 00:17:08.060 ************************************ 00:17:08.060 15:40:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:08.318 * Looking for test storage... 00:17:08.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.318 
15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=215df4caa8494a9f92ca9a9b468f7e79 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:08.318 Cannot find device "nvmf_tgt_br" 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:08.318 Cannot find device "nvmf_tgt_br2" 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:08.318 Cannot find device "nvmf_tgt_br" 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:08.318 Cannot find device "nvmf_tgt_br2" 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:08.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:08.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:08.318 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:17:08.319 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:08.319 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:08.319 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:08.319 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:08.319 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:08.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:17:08.577 00:17:08.577 --- 10.0.0.2 ping statistics --- 00:17:08.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.577 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:08.577 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:08.577 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:08.577 00:17:08.577 --- 10.0.0.3 ping statistics --- 00:17:08.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.577 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:08.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:08.577 00:17:08.577 --- 10.0.0.1 ping statistics --- 00:17:08.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.577 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86100 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86100 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86100 ']' 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:08.577 15:40:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:08.577 [2024-07-15 15:40:03.683349] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:08.577 [2024-07-15 15:40:03.683441] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.836 [2024-07-15 15:40:03.822454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.836 [2024-07-15 15:40:03.876675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.836 [2024-07-15 15:40:03.876725] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:08.836 [2024-07-15 15:40:03.876735] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.836 [2024-07-15 15:40:03.876742] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.836 [2024-07-15 15:40:03.876748] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.836 [2024-07-15 15:40:03.876768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:09.770 [2024-07-15 15:40:04.664173] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:09.770 null0 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 215df4caa8494a9f92ca9a9b468f7e79 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:09.770 
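(For reference: the target-side setup that the nvmf_async_init trace above performs reduces to the RPC sequence below. This is a minimal sketch only; scripts/rpc.py is shown as a stand-in for the harness's rpc_cmd wrapper, and the NGUID is the example value generated by uuidgen earlier in this run.)
    # create the TCP transport, a 1024 MiB null bdev with 512-byte blocks, and a subsystem that allows any host
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    # attach the null bdev as namespace 1 with the generated NGUID, then listen on 10.0.0.2:4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 215df4caa8494a9f92ca9a9b468f7e79
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
(The host side then attaches with bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0, which is what produces the nvme0n1 bdev dump in the trace that follows.)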
15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.770 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:09.770 [2024-07-15 15:40:04.704217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.771 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.771 15:40:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:09.771 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.771 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:10.030 nvme0n1 00:17:10.030 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.030 15:40:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:10.030 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.030 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:10.030 [ 00:17:10.030 { 00:17:10.030 "aliases": [ 00:17:10.030 "215df4ca-a849-4a9f-92ca-9a9b468f7e79" 00:17:10.030 ], 00:17:10.030 "assigned_rate_limits": { 00:17:10.030 "r_mbytes_per_sec": 0, 00:17:10.030 "rw_ios_per_sec": 0, 00:17:10.030 "rw_mbytes_per_sec": 0, 00:17:10.030 "w_mbytes_per_sec": 0 00:17:10.030 }, 00:17:10.030 "block_size": 512, 00:17:10.030 "claimed": false, 00:17:10.030 "driver_specific": { 00:17:10.030 "mp_policy": "active_passive", 00:17:10.030 "nvme": [ 00:17:10.030 { 00:17:10.030 "ctrlr_data": { 00:17:10.030 "ana_reporting": false, 00:17:10.030 "cntlid": 1, 00:17:10.030 "firmware_revision": "24.09", 00:17:10.030 "model_number": "SPDK bdev Controller", 00:17:10.030 "multi_ctrlr": true, 00:17:10.030 "oacs": { 00:17:10.030 "firmware": 0, 00:17:10.030 "format": 0, 00:17:10.030 "ns_manage": 0, 00:17:10.030 "security": 0 00:17:10.030 }, 00:17:10.030 "serial_number": "00000000000000000000", 00:17:10.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:10.030 "vendor_id": "0x8086" 00:17:10.030 }, 00:17:10.030 "ns_data": { 00:17:10.030 "can_share": true, 00:17:10.030 "id": 1 00:17:10.030 }, 00:17:10.030 "trid": { 00:17:10.030 "adrfam": "IPv4", 00:17:10.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:10.030 "traddr": "10.0.0.2", 00:17:10.030 "trsvcid": "4420", 00:17:10.030 "trtype": "TCP" 00:17:10.030 }, 00:17:10.030 "vs": { 00:17:10.030 "nvme_version": "1.3" 00:17:10.030 } 00:17:10.030 } 00:17:10.030 ] 00:17:10.030 }, 00:17:10.030 "memory_domains": [ 00:17:10.030 { 00:17:10.030 "dma_device_id": "system", 00:17:10.030 "dma_device_type": 1 00:17:10.030 } 00:17:10.030 ], 00:17:10.030 "name": "nvme0n1", 00:17:10.030 "num_blocks": 2097152, 00:17:10.030 "product_name": "NVMe disk", 00:17:10.030 "supported_io_types": { 00:17:10.030 "abort": true, 00:17:10.030 "compare": true, 00:17:10.030 "compare_and_write": true, 00:17:10.030 "copy": true, 00:17:10.030 "flush": true, 00:17:10.030 "get_zone_info": false, 00:17:10.030 "nvme_admin": true, 00:17:10.030 "nvme_io": true, 00:17:10.030 "nvme_io_md": false, 00:17:10.030 "nvme_iov_md": false, 00:17:10.030 "read": true, 00:17:10.030 "reset": true, 00:17:10.030 "seek_data": false, 00:17:10.030 "seek_hole": false, 00:17:10.030 "unmap": false, 00:17:10.030 "write": true, 00:17:10.030 "write_zeroes": true, 00:17:10.030 "zcopy": false, 00:17:10.030 
"zone_append": false, 00:17:10.030 "zone_management": false 00:17:10.030 }, 00:17:10.030 "uuid": "215df4ca-a849-4a9f-92ca-9a9b468f7e79", 00:17:10.030 "zoned": false 00:17:10.030 } 00:17:10.030 ] 00:17:10.030 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.030 15:40:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:10.030 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.030 15:40:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:10.030 [2024-07-15 15:40:04.972441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:10.030 [2024-07-15 15:40:04.972533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7ca30 (9): Bad file descriptor 00:17:10.030 [2024-07-15 15:40:05.104644] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:10.030 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.030 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:10.030 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.030 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:10.030 [ 00:17:10.030 { 00:17:10.030 "aliases": [ 00:17:10.030 "215df4ca-a849-4a9f-92ca-9a9b468f7e79" 00:17:10.030 ], 00:17:10.030 "assigned_rate_limits": { 00:17:10.030 "r_mbytes_per_sec": 0, 00:17:10.030 "rw_ios_per_sec": 0, 00:17:10.030 "rw_mbytes_per_sec": 0, 00:17:10.030 "w_mbytes_per_sec": 0 00:17:10.030 }, 00:17:10.030 "block_size": 512, 00:17:10.030 "claimed": false, 00:17:10.030 "driver_specific": { 00:17:10.030 "mp_policy": "active_passive", 00:17:10.030 "nvme": [ 00:17:10.030 { 00:17:10.030 "ctrlr_data": { 00:17:10.030 "ana_reporting": false, 00:17:10.030 "cntlid": 2, 00:17:10.030 "firmware_revision": "24.09", 00:17:10.030 "model_number": "SPDK bdev Controller", 00:17:10.030 "multi_ctrlr": true, 00:17:10.030 "oacs": { 00:17:10.030 "firmware": 0, 00:17:10.030 "format": 0, 00:17:10.030 "ns_manage": 0, 00:17:10.030 "security": 0 00:17:10.030 }, 00:17:10.030 "serial_number": "00000000000000000000", 00:17:10.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:10.030 "vendor_id": "0x8086" 00:17:10.030 }, 00:17:10.030 "ns_data": { 00:17:10.030 "can_share": true, 00:17:10.030 "id": 1 00:17:10.030 }, 00:17:10.030 "trid": { 00:17:10.030 "adrfam": "IPv4", 00:17:10.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:10.030 "traddr": "10.0.0.2", 00:17:10.030 "trsvcid": "4420", 00:17:10.030 "trtype": "TCP" 00:17:10.030 }, 00:17:10.030 "vs": { 00:17:10.030 "nvme_version": "1.3" 00:17:10.030 } 00:17:10.030 } 00:17:10.030 ] 00:17:10.030 }, 00:17:10.030 "memory_domains": [ 00:17:10.030 { 00:17:10.030 "dma_device_id": "system", 00:17:10.030 "dma_device_type": 1 00:17:10.030 } 00:17:10.030 ], 00:17:10.030 "name": "nvme0n1", 00:17:10.030 "num_blocks": 2097152, 00:17:10.030 "product_name": "NVMe disk", 00:17:10.030 "supported_io_types": { 00:17:10.030 "abort": true, 00:17:10.030 "compare": true, 00:17:10.030 "compare_and_write": true, 00:17:10.030 "copy": true, 00:17:10.030 "flush": true, 00:17:10.030 "get_zone_info": false, 00:17:10.030 "nvme_admin": true, 00:17:10.031 "nvme_io": true, 00:17:10.031 "nvme_io_md": false, 00:17:10.031 "nvme_iov_md": false, 00:17:10.031 "read": true, 
00:17:10.031 "reset": true, 00:17:10.031 "seek_data": false, 00:17:10.031 "seek_hole": false, 00:17:10.031 "unmap": false, 00:17:10.031 "write": true, 00:17:10.031 "write_zeroes": true, 00:17:10.031 "zcopy": false, 00:17:10.031 "zone_append": false, 00:17:10.031 "zone_management": false 00:17:10.031 }, 00:17:10.031 "uuid": "215df4ca-a849-4a9f-92ca-9a9b468f7e79", 00:17:10.031 "zoned": false 00:17:10.031 } 00:17:10.031 ] 00:17:10.031 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.031 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.031 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.031 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:10.031 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.031 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:10.031 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.IJHsdeQ4lF 00:17:10.031 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:10.031 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.IJHsdeQ4lF 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:10.290 [2024-07-15 15:40:05.172609] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:10.290 [2024-07-15 15:40:05.172730] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IJHsdeQ4lF 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:10.290 [2024-07-15 15:40:05.180635] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IJHsdeQ4lF 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.290 15:40:05 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:10.290 [2024-07-15 15:40:05.188628] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:10.290 [2024-07-15 15:40:05.188690] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:10.290 nvme0n1 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.290 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:10.290 [ 00:17:10.290 { 00:17:10.290 "aliases": [ 00:17:10.290 "215df4ca-a849-4a9f-92ca-9a9b468f7e79" 00:17:10.290 ], 00:17:10.290 "assigned_rate_limits": { 00:17:10.290 "r_mbytes_per_sec": 0, 00:17:10.290 "rw_ios_per_sec": 0, 00:17:10.290 "rw_mbytes_per_sec": 0, 00:17:10.290 "w_mbytes_per_sec": 0 00:17:10.290 }, 00:17:10.290 "block_size": 512, 00:17:10.290 "claimed": false, 00:17:10.290 "driver_specific": { 00:17:10.290 "mp_policy": "active_passive", 00:17:10.290 "nvme": [ 00:17:10.290 { 00:17:10.290 "ctrlr_data": { 00:17:10.290 "ana_reporting": false, 00:17:10.290 "cntlid": 3, 00:17:10.290 "firmware_revision": "24.09", 00:17:10.290 "model_number": "SPDK bdev Controller", 00:17:10.290 "multi_ctrlr": true, 00:17:10.290 "oacs": { 00:17:10.290 "firmware": 0, 00:17:10.290 "format": 0, 00:17:10.290 "ns_manage": 0, 00:17:10.290 "security": 0 00:17:10.290 }, 00:17:10.290 "serial_number": "00000000000000000000", 00:17:10.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:10.290 "vendor_id": "0x8086" 00:17:10.290 }, 00:17:10.290 "ns_data": { 00:17:10.290 "can_share": true, 00:17:10.290 "id": 1 00:17:10.290 }, 00:17:10.290 "trid": { 00:17:10.290 "adrfam": "IPv4", 00:17:10.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:10.291 "traddr": "10.0.0.2", 00:17:10.291 "trsvcid": "4421", 00:17:10.291 "trtype": "TCP" 00:17:10.291 }, 00:17:10.291 "vs": { 00:17:10.291 "nvme_version": "1.3" 00:17:10.291 } 00:17:10.291 } 00:17:10.291 ] 00:17:10.291 }, 00:17:10.291 "memory_domains": [ 00:17:10.291 { 00:17:10.291 "dma_device_id": "system", 00:17:10.291 "dma_device_type": 1 00:17:10.291 } 00:17:10.291 ], 00:17:10.291 "name": "nvme0n1", 00:17:10.291 "num_blocks": 2097152, 00:17:10.291 "product_name": "NVMe disk", 00:17:10.291 "supported_io_types": { 00:17:10.291 "abort": true, 00:17:10.291 "compare": true, 00:17:10.291 "compare_and_write": true, 00:17:10.291 "copy": true, 00:17:10.291 "flush": true, 00:17:10.291 "get_zone_info": false, 00:17:10.291 "nvme_admin": true, 00:17:10.291 "nvme_io": true, 00:17:10.291 "nvme_io_md": false, 00:17:10.291 "nvme_iov_md": false, 00:17:10.291 "read": true, 00:17:10.291 "reset": true, 00:17:10.291 "seek_data": false, 00:17:10.291 "seek_hole": false, 00:17:10.291 "unmap": false, 00:17:10.291 "write": true, 00:17:10.291 "write_zeroes": true, 00:17:10.291 "zcopy": false, 00:17:10.291 "zone_append": false, 00:17:10.291 "zone_management": false 00:17:10.291 }, 00:17:10.291 "uuid": "215df4ca-a849-4a9f-92ca-9a9b468f7e79", 00:17:10.291 "zoned": false 00:17:10.291 } 00:17:10.291 ] 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.IJHsdeQ4lF 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:10.291 rmmod nvme_tcp 00:17:10.291 rmmod nvme_fabrics 00:17:10.291 rmmod nvme_keyring 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86100 ']' 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86100 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86100 ']' 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86100 00:17:10.291 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86100 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:10.550 killing process with pid 86100 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86100' 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86100 00:17:10.550 [2024-07-15 15:40:05.446758] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:10.550 [2024-07-15 15:40:05.446790] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86100 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:10.550 
15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:10.550 00:17:10.550 real 0m2.454s 00:17:10.550 user 0m2.341s 00:17:10.550 sys 0m0.516s 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:10.550 15:40:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:10.550 ************************************ 00:17:10.550 END TEST nvmf_async_init 00:17:10.550 ************************************ 00:17:10.550 15:40:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:10.550 15:40:05 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:10.550 15:40:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:10.550 15:40:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.550 15:40:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:10.810 ************************************ 00:17:10.810 START TEST dma 00:17:10.810 ************************************ 00:17:10.810 15:40:05 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:10.810 * Looking for test storage... 00:17:10.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:10.810 15:40:05 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:10.810 15:40:05 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.810 15:40:05 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.810 15:40:05 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.810 15:40:05 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.810 15:40:05 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.810 15:40:05 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.810 15:40:05 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:17:10.810 15:40:05 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.810 15:40:05 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.810 15:40:05 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:10.810 15:40:05 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:17:10.810 00:17:10.810 real 0m0.103s 00:17:10.810 user 0m0.057s 00:17:10.810 sys 0m0.052s 00:17:10.810 15:40:05 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:10.810 15:40:05 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:17:10.810 ************************************ 00:17:10.810 END TEST dma 00:17:10.810 ************************************ 00:17:10.810 15:40:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:10.810 15:40:05 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:10.810 15:40:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:10.810 15:40:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.810 15:40:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:10.810 ************************************ 00:17:10.810 START TEST nvmf_identify 00:17:10.810 ************************************ 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:10.810 * Looking for test storage... 00:17:10.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.810 15:40:05 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.811 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:11.070 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:11.070 Cannot find device "nvmf_tgt_br" 00:17:11.071 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:17:11.071 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:11.071 Cannot find device "nvmf_tgt_br2" 00:17:11.071 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:17:11.071 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:11.071 15:40:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:17:11.071 Cannot find device "nvmf_tgt_br" 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:11.071 Cannot find device "nvmf_tgt_br2" 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:11.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:11.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:11.071 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:11.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:17:11.330 00:17:11.330 --- 10.0.0.2 ping statistics --- 00:17:11.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.330 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:11.330 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:11.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:11.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:17:11.330 00:17:11.330 --- 10.0.0.3 ping statistics --- 00:17:11.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.331 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:11.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:11.331 00:17:11.331 --- 10.0.0.1 ping statistics --- 00:17:11.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.331 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86369 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86369 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86369 ']' 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.331 15:40:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:11.331 [2024-07-15 15:40:06.388985] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:11.331 [2024-07-15 15:40:06.389076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.596 [2024-07-15 15:40:06.527836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:11.596 [2024-07-15 15:40:06.581614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.596 [2024-07-15 15:40:06.581659] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.596 [2024-07-15 15:40:06.581668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.596 [2024-07-15 15:40:06.581674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.596 [2024-07-15 15:40:06.581680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.596 [2024-07-15 15:40:06.581838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.596 [2024-07-15 15:40:06.584576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.596 [2024-07-15 15:40:06.584700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:11.597 [2024-07-15 15:40:06.584710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:12.558 [2024-07-15 15:40:07.388791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:12.558 Malloc0 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.558 
15:40:07 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:12.558 [2024-07-15 15:40:07.482815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:12.558 [ 00:17:12.558 { 00:17:12.558 "allow_any_host": true, 00:17:12.558 "hosts": [], 00:17:12.558 "listen_addresses": [ 00:17:12.558 { 00:17:12.558 "adrfam": "IPv4", 00:17:12.558 "traddr": "10.0.0.2", 00:17:12.558 "trsvcid": "4420", 00:17:12.558 "trtype": "TCP" 00:17:12.558 } 00:17:12.558 ], 00:17:12.558 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:12.558 "subtype": "Discovery" 00:17:12.558 }, 00:17:12.558 { 00:17:12.558 "allow_any_host": true, 00:17:12.558 "hosts": [], 00:17:12.558 "listen_addresses": [ 00:17:12.558 { 00:17:12.558 "adrfam": "IPv4", 00:17:12.558 "traddr": "10.0.0.2", 00:17:12.558 "trsvcid": "4420", 00:17:12.558 "trtype": "TCP" 00:17:12.558 } 00:17:12.558 ], 00:17:12.558 "max_cntlid": 65519, 00:17:12.558 "max_namespaces": 32, 00:17:12.558 "min_cntlid": 1, 00:17:12.558 "model_number": "SPDK bdev Controller", 00:17:12.558 "namespaces": [ 00:17:12.558 { 00:17:12.558 "bdev_name": "Malloc0", 00:17:12.558 "eui64": "ABCDEF0123456789", 00:17:12.558 "name": "Malloc0", 00:17:12.558 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:12.558 "nsid": 1, 00:17:12.558 "uuid": "2975d394-d5b5-4f39-84d6-1ccd0061b447" 00:17:12.558 } 00:17:12.558 ], 00:17:12.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.558 "serial_number": "SPDK00000000000001", 00:17:12.558 "subtype": "NVMe" 00:17:12.558 } 00:17:12.558 ] 
00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.558 15:40:07 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:12.558 [2024-07-15 15:40:07.541987] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:12.558 [2024-07-15 15:40:07.542046] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86422 ] 00:17:12.558 [2024-07-15 15:40:07.681702] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:12.558 [2024-07-15 15:40:07.681760] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:12.558 [2024-07-15 15:40:07.681767] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:12.558 [2024-07-15 15:40:07.681779] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:12.558 [2024-07-15 15:40:07.681785] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:12.558 [2024-07-15 15:40:07.682105] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:12.558 [2024-07-15 15:40:07.682165] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ef9a60 0 00:17:12.820 [2024-07-15 15:40:07.694557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:12.820 [2024-07-15 15:40:07.694595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:12.820 [2024-07-15 15:40:07.694602] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:12.820 [2024-07-15 15:40:07.694605] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:12.820 [2024-07-15 15:40:07.694646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.694654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.694658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef9a60) 00:17:12.820 [2024-07-15 15:40:07.694670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:12.820 [2024-07-15 15:40:07.694707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c840, cid 0, qid 0 00:17:12.820 [2024-07-15 15:40:07.701582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.820 [2024-07-15 15:40:07.701609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.820 [2024-07-15 15:40:07.701615] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.701621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3c840) on tqpair=0x1ef9a60 00:17:12.820 [2024-07-15 15:40:07.701631] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:12.820 [2024-07-15 15:40:07.701639] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:12.820 [2024-07-15 15:40:07.701646] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:12.820 [2024-07-15 15:40:07.701664] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.701669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.701673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef9a60) 00:17:12.820 [2024-07-15 15:40:07.701682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.820 [2024-07-15 15:40:07.701727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c840, cid 0, qid 0 00:17:12.820 [2024-07-15 15:40:07.701849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.820 [2024-07-15 15:40:07.701857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.820 [2024-07-15 15:40:07.701861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.701866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3c840) on tqpair=0x1ef9a60 00:17:12.820 [2024-07-15 15:40:07.701872] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:12.820 [2024-07-15 15:40:07.701880] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:12.820 [2024-07-15 15:40:07.701888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.701893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.701897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef9a60) 00:17:12.820 [2024-07-15 15:40:07.701905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.820 [2024-07-15 15:40:07.701925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c840, cid 0, qid 0 00:17:12.820 [2024-07-15 15:40:07.701998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.820 [2024-07-15 15:40:07.702005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.820 [2024-07-15 15:40:07.702009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.702014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3c840) on tqpair=0x1ef9a60 00:17:12.820 [2024-07-15 15:40:07.702020] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:12.820 [2024-07-15 15:40:07.702043] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:12.820 [2024-07-15 15:40:07.702051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.702055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.702059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef9a60) 
00:17:12.820 [2024-07-15 15:40:07.702066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.820 [2024-07-15 15:40:07.702084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c840, cid 0, qid 0 00:17:12.820 [2024-07-15 15:40:07.702137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.820 [2024-07-15 15:40:07.702144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.820 [2024-07-15 15:40:07.702148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.702152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3c840) on tqpair=0x1ef9a60 00:17:12.820 [2024-07-15 15:40:07.702158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:12.820 [2024-07-15 15:40:07.702168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.702173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.702177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef9a60) 00:17:12.820 [2024-07-15 15:40:07.702184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.820 [2024-07-15 15:40:07.702201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c840, cid 0, qid 0 00:17:12.820 [2024-07-15 15:40:07.702249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.820 [2024-07-15 15:40:07.702256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.820 [2024-07-15 15:40:07.702260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.820 [2024-07-15 15:40:07.702264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3c840) on tqpair=0x1ef9a60 00:17:12.820 [2024-07-15 15:40:07.702269] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:12.820 [2024-07-15 15:40:07.702274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:12.820 [2024-07-15 15:40:07.702282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:12.820 [2024-07-15 15:40:07.702388] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:12.820 [2024-07-15 15:40:07.702394] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:12.820 [2024-07-15 15:40:07.702403] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702412] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef9a60) 00:17:12.821 [2024-07-15 15:40:07.702419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.821 
[2024-07-15 15:40:07.702438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c840, cid 0, qid 0 00:17:12.821 [2024-07-15 15:40:07.702489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.821 [2024-07-15 15:40:07.702496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.821 [2024-07-15 15:40:07.702500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3c840) on tqpair=0x1ef9a60 00:17:12.821 [2024-07-15 15:40:07.702509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:12.821 [2024-07-15 15:40:07.702520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef9a60) 00:17:12.821 [2024-07-15 15:40:07.702554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.821 [2024-07-15 15:40:07.702588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c840, cid 0, qid 0 00:17:12.821 [2024-07-15 15:40:07.702640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.821 [2024-07-15 15:40:07.702646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.821 [2024-07-15 15:40:07.702650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3c840) on tqpair=0x1ef9a60 00:17:12.821 [2024-07-15 15:40:07.702660] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:12.821 [2024-07-15 15:40:07.702665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:12.821 [2024-07-15 15:40:07.702673] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:12.821 [2024-07-15 15:40:07.702684] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:12.821 [2024-07-15 15:40:07.702696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef9a60) 00:17:12.821 [2024-07-15 15:40:07.702735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.821 [2024-07-15 15:40:07.702758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c840, cid 0, qid 0 00:17:12.821 [2024-07-15 15:40:07.702852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:12.821 [2024-07-15 15:40:07.702860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:12.821 [2024-07-15 15:40:07.702864] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:17:12.821 [2024-07-15 15:40:07.702869] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef9a60): datao=0, datal=4096, cccid=0 00:17:12.821 [2024-07-15 15:40:07.702874] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3c840) on tqpair(0x1ef9a60): expected_datao=0, payload_size=4096 00:17:12.821 [2024-07-15 15:40:07.702880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702888] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702893] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.821 [2024-07-15 15:40:07.702908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.821 [2024-07-15 15:40:07.702912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3c840) on tqpair=0x1ef9a60 00:17:12.821 [2024-07-15 15:40:07.702926] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:12.821 [2024-07-15 15:40:07.702932] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:12.821 [2024-07-15 15:40:07.702937] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:12.821 [2024-07-15 15:40:07.702943] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:12.821 [2024-07-15 15:40:07.702948] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:12.821 [2024-07-15 15:40:07.702955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:12.821 [2024-07-15 15:40:07.702964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:12.821 [2024-07-15 15:40:07.702972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.702982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef9a60) 00:17:12.821 [2024-07-15 15:40:07.702990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:12.821 [2024-07-15 15:40:07.703010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c840, cid 0, qid 0 00:17:12.821 [2024-07-15 15:40:07.703086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.821 [2024-07-15 15:40:07.703093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.821 [2024-07-15 15:40:07.703097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703101] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3c840) on tqpair=0x1ef9a60 00:17:12.821 [2024-07-15 15:40:07.703109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.821 [2024-07-15 
15:40:07.703114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef9a60) 00:17:12.821 [2024-07-15 15:40:07.703125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.821 [2024-07-15 15:40:07.703132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ef9a60) 00:17:12.821 [2024-07-15 15:40:07.703146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.821 [2024-07-15 15:40:07.703153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703157] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ef9a60) 00:17:12.821 [2024-07-15 15:40:07.703167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.821 [2024-07-15 15:40:07.703173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.821 [2024-07-15 15:40:07.703187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.821 [2024-07-15 15:40:07.703193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:12.821 [2024-07-15 15:40:07.703206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:12.821 [2024-07-15 15:40:07.703215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef9a60) 00:17:12.821 [2024-07-15 15:40:07.703226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.821 [2024-07-15 15:40:07.703246] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c840, cid 0, qid 0 00:17:12.821 [2024-07-15 15:40:07.703253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3c9c0, cid 1, qid 0 00:17:12.821 [2024-07-15 15:40:07.703258] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3cb40, cid 2, qid 0 00:17:12.821 [2024-07-15 15:40:07.703264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.821 [2024-07-15 15:40:07.703269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ce40, cid 4, qid 0 00:17:12.821 [2024-07-15 15:40:07.703373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:17:12.821 [2024-07-15 15:40:07.703379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.821 [2024-07-15 15:40:07.703383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ce40) on tqpair=0x1ef9a60 00:17:12.821 [2024-07-15 15:40:07.703392] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:12.821 [2024-07-15 15:40:07.703401] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:12.821 [2024-07-15 15:40:07.703414] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703419] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef9a60) 00:17:12.821 [2024-07-15 15:40:07.703426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.821 [2024-07-15 15:40:07.703445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ce40, cid 4, qid 0 00:17:12.821 [2024-07-15 15:40:07.703506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:12.821 [2024-07-15 15:40:07.703513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:12.821 [2024-07-15 15:40:07.703517] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703520] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef9a60): datao=0, datal=4096, cccid=4 00:17:12.821 [2024-07-15 15:40:07.703525] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3ce40) on tqpair(0x1ef9a60): expected_datao=0, payload_size=4096 00:17:12.821 [2024-07-15 15:40:07.703530] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703537] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703541] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.821 [2024-07-15 15:40:07.703555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.821 [2024-07-15 15:40:07.703559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ce40) on tqpair=0x1ef9a60 00:17:12.821 [2024-07-15 15:40:07.703591] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:12.821 [2024-07-15 15:40:07.703619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.821 [2024-07-15 15:40:07.703626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef9a60) 00:17:12.822 [2024-07-15 15:40:07.703633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.822 [2024-07-15 15:40:07.703641] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.703645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:12.822 [2024-07-15 15:40:07.703649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ef9a60) 00:17:12.822 [2024-07-15 15:40:07.703655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.822 [2024-07-15 15:40:07.703680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ce40, cid 4, qid 0 00:17:12.822 [2024-07-15 15:40:07.703688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3cfc0, cid 5, qid 0 00:17:12.822 [2024-07-15 15:40:07.703782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:12.822 [2024-07-15 15:40:07.703790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:12.822 [2024-07-15 15:40:07.703793] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.703797] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef9a60): datao=0, datal=1024, cccid=4 00:17:12.822 [2024-07-15 15:40:07.703802] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3ce40) on tqpair(0x1ef9a60): expected_datao=0, payload_size=1024 00:17:12.822 [2024-07-15 15:40:07.703807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.703814] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.703817] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.703823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.822 [2024-07-15 15:40:07.703829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.822 [2024-07-15 15:40:07.703833] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.703837] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3cfc0) on tqpair=0x1ef9a60 00:17:12.822 [2024-07-15 15:40:07.744570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.822 [2024-07-15 15:40:07.744592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.822 [2024-07-15 15:40:07.744598] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.744602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ce40) on tqpair=0x1ef9a60 00:17:12.822 [2024-07-15 15:40:07.744618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.744623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef9a60) 00:17:12.822 [2024-07-15 15:40:07.744632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.822 [2024-07-15 15:40:07.744664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ce40, cid 4, qid 0 00:17:12.822 [2024-07-15 15:40:07.744736] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:12.822 [2024-07-15 15:40:07.744743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:12.822 [2024-07-15 15:40:07.744747] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.744750] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef9a60): datao=0, datal=3072, cccid=4 00:17:12.822 [2024-07-15 
15:40:07.744771] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3ce40) on tqpair(0x1ef9a60): expected_datao=0, payload_size=3072 00:17:12.822 [2024-07-15 15:40:07.744776] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.744800] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.744804] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.744812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.822 [2024-07-15 15:40:07.744818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.822 [2024-07-15 15:40:07.744822] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.744826] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ce40) on tqpair=0x1ef9a60 00:17:12.822 [2024-07-15 15:40:07.744837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.744842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef9a60) 00:17:12.822 [2024-07-15 15:40:07.744850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.822 [2024-07-15 15:40:07.744875] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ce40, cid 4, qid 0 00:17:12.822 [2024-07-15 15:40:07.744945] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:12.822 [2024-07-15 15:40:07.744952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:12.822 [2024-07-15 15:40:07.744956] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.744960] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef9a60): datao=0, datal=8, cccid=4 00:17:12.822 [2024-07-15 15:40:07.744965] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3ce40) on tqpair(0x1ef9a60): expected_datao=0, payload_size=8 00:17:12.822 [2024-07-15 15:40:07.744969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.744976] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.744980] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.785599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.822 [2024-07-15 15:40:07.785621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.822 [2024-07-15 15:40:07.785627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.822 [2024-07-15 15:40:07.785631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ce40) on tqpair=0x1ef9a60 00:17:12.822 ===================================================== 00:17:12.822 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:12.822 ===================================================== 00:17:12.822 Controller Capabilities/Features 00:17:12.822 ================================ 00:17:12.822 Vendor ID: 0000 00:17:12.822 Subsystem Vendor ID: 0000 00:17:12.822 Serial Number: .................... 00:17:12.822 Model Number: ........................................ 
00:17:12.822 Firmware Version: 24.09 00:17:12.822 Recommended Arb Burst: 0 00:17:12.822 IEEE OUI Identifier: 00 00 00 00:17:12.822 Multi-path I/O 00:17:12.822 May have multiple subsystem ports: No 00:17:12.822 May have multiple controllers: No 00:17:12.822 Associated with SR-IOV VF: No 00:17:12.822 Max Data Transfer Size: 131072 00:17:12.822 Max Number of Namespaces: 0 00:17:12.822 Max Number of I/O Queues: 1024 00:17:12.822 NVMe Specification Version (VS): 1.3 00:17:12.822 NVMe Specification Version (Identify): 1.3 00:17:12.822 Maximum Queue Entries: 128 00:17:12.822 Contiguous Queues Required: Yes 00:17:12.822 Arbitration Mechanisms Supported 00:17:12.822 Weighted Round Robin: Not Supported 00:17:12.822 Vendor Specific: Not Supported 00:17:12.822 Reset Timeout: 15000 ms 00:17:12.822 Doorbell Stride: 4 bytes 00:17:12.822 NVM Subsystem Reset: Not Supported 00:17:12.822 Command Sets Supported 00:17:12.822 NVM Command Set: Supported 00:17:12.822 Boot Partition: Not Supported 00:17:12.822 Memory Page Size Minimum: 4096 bytes 00:17:12.822 Memory Page Size Maximum: 4096 bytes 00:17:12.822 Persistent Memory Region: Not Supported 00:17:12.822 Optional Asynchronous Events Supported 00:17:12.822 Namespace Attribute Notices: Not Supported 00:17:12.822 Firmware Activation Notices: Not Supported 00:17:12.822 ANA Change Notices: Not Supported 00:17:12.822 PLE Aggregate Log Change Notices: Not Supported 00:17:12.822 LBA Status Info Alert Notices: Not Supported 00:17:12.822 EGE Aggregate Log Change Notices: Not Supported 00:17:12.822 Normal NVM Subsystem Shutdown event: Not Supported 00:17:12.822 Zone Descriptor Change Notices: Not Supported 00:17:12.822 Discovery Log Change Notices: Supported 00:17:12.822 Controller Attributes 00:17:12.822 128-bit Host Identifier: Not Supported 00:17:12.822 Non-Operational Permissive Mode: Not Supported 00:17:12.822 NVM Sets: Not Supported 00:17:12.822 Read Recovery Levels: Not Supported 00:17:12.822 Endurance Groups: Not Supported 00:17:12.822 Predictable Latency Mode: Not Supported 00:17:12.822 Traffic Based Keep ALive: Not Supported 00:17:12.822 Namespace Granularity: Not Supported 00:17:12.822 SQ Associations: Not Supported 00:17:12.822 UUID List: Not Supported 00:17:12.822 Multi-Domain Subsystem: Not Supported 00:17:12.822 Fixed Capacity Management: Not Supported 00:17:12.822 Variable Capacity Management: Not Supported 00:17:12.822 Delete Endurance Group: Not Supported 00:17:12.822 Delete NVM Set: Not Supported 00:17:12.822 Extended LBA Formats Supported: Not Supported 00:17:12.822 Flexible Data Placement Supported: Not Supported 00:17:12.822 00:17:12.822 Controller Memory Buffer Support 00:17:12.822 ================================ 00:17:12.822 Supported: No 00:17:12.822 00:17:12.822 Persistent Memory Region Support 00:17:12.822 ================================ 00:17:12.822 Supported: No 00:17:12.822 00:17:12.822 Admin Command Set Attributes 00:17:12.822 ============================ 00:17:12.822 Security Send/Receive: Not Supported 00:17:12.822 Format NVM: Not Supported 00:17:12.822 Firmware Activate/Download: Not Supported 00:17:12.822 Namespace Management: Not Supported 00:17:12.822 Device Self-Test: Not Supported 00:17:12.822 Directives: Not Supported 00:17:12.822 NVMe-MI: Not Supported 00:17:12.822 Virtualization Management: Not Supported 00:17:12.822 Doorbell Buffer Config: Not Supported 00:17:12.822 Get LBA Status Capability: Not Supported 00:17:12.822 Command & Feature Lockdown Capability: Not Supported 00:17:12.822 Abort Command Limit: 1 00:17:12.822 Async 
Event Request Limit: 4 00:17:12.822 Number of Firmware Slots: N/A 00:17:12.822 Firmware Slot 1 Read-Only: N/A 00:17:12.822 Firmware Activation Without Reset: N/A 00:17:12.822 Multiple Update Detection Support: N/A 00:17:12.822 Firmware Update Granularity: No Information Provided 00:17:12.822 Per-Namespace SMART Log: No 00:17:12.822 Asymmetric Namespace Access Log Page: Not Supported 00:17:12.822 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:12.822 Command Effects Log Page: Not Supported 00:17:12.822 Get Log Page Extended Data: Supported 00:17:12.822 Telemetry Log Pages: Not Supported 00:17:12.822 Persistent Event Log Pages: Not Supported 00:17:12.822 Supported Log Pages Log Page: May Support 00:17:12.823 Commands Supported & Effects Log Page: Not Supported 00:17:12.823 Feature Identifiers & Effects Log Page:May Support 00:17:12.823 NVMe-MI Commands & Effects Log Page: May Support 00:17:12.823 Data Area 4 for Telemetry Log: Not Supported 00:17:12.823 Error Log Page Entries Supported: 128 00:17:12.823 Keep Alive: Not Supported 00:17:12.823 00:17:12.823 NVM Command Set Attributes 00:17:12.823 ========================== 00:17:12.823 Submission Queue Entry Size 00:17:12.823 Max: 1 00:17:12.823 Min: 1 00:17:12.823 Completion Queue Entry Size 00:17:12.823 Max: 1 00:17:12.823 Min: 1 00:17:12.823 Number of Namespaces: 0 00:17:12.823 Compare Command: Not Supported 00:17:12.823 Write Uncorrectable Command: Not Supported 00:17:12.823 Dataset Management Command: Not Supported 00:17:12.823 Write Zeroes Command: Not Supported 00:17:12.823 Set Features Save Field: Not Supported 00:17:12.823 Reservations: Not Supported 00:17:12.823 Timestamp: Not Supported 00:17:12.823 Copy: Not Supported 00:17:12.823 Volatile Write Cache: Not Present 00:17:12.823 Atomic Write Unit (Normal): 1 00:17:12.823 Atomic Write Unit (PFail): 1 00:17:12.823 Atomic Compare & Write Unit: 1 00:17:12.823 Fused Compare & Write: Supported 00:17:12.823 Scatter-Gather List 00:17:12.823 SGL Command Set: Supported 00:17:12.823 SGL Keyed: Supported 00:17:12.823 SGL Bit Bucket Descriptor: Not Supported 00:17:12.823 SGL Metadata Pointer: Not Supported 00:17:12.823 Oversized SGL: Not Supported 00:17:12.823 SGL Metadata Address: Not Supported 00:17:12.823 SGL Offset: Supported 00:17:12.823 Transport SGL Data Block: Not Supported 00:17:12.823 Replay Protected Memory Block: Not Supported 00:17:12.823 00:17:12.823 Firmware Slot Information 00:17:12.823 ========================= 00:17:12.823 Active slot: 0 00:17:12.823 00:17:12.823 00:17:12.823 Error Log 00:17:12.823 ========= 00:17:12.823 00:17:12.823 Active Namespaces 00:17:12.823 ================= 00:17:12.823 Discovery Log Page 00:17:12.823 ================== 00:17:12.823 Generation Counter: 2 00:17:12.823 Number of Records: 2 00:17:12.823 Record Format: 0 00:17:12.823 00:17:12.823 Discovery Log Entry 0 00:17:12.823 ---------------------- 00:17:12.823 Transport Type: 3 (TCP) 00:17:12.823 Address Family: 1 (IPv4) 00:17:12.823 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:12.823 Entry Flags: 00:17:12.823 Duplicate Returned Information: 1 00:17:12.823 Explicit Persistent Connection Support for Discovery: 1 00:17:12.823 Transport Requirements: 00:17:12.823 Secure Channel: Not Required 00:17:12.823 Port ID: 0 (0x0000) 00:17:12.823 Controller ID: 65535 (0xffff) 00:17:12.823 Admin Max SQ Size: 128 00:17:12.823 Transport Service Identifier: 4420 00:17:12.823 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:12.823 Transport Address: 10.0.0.2 00:17:12.823 
Discovery Log Entry 1 00:17:12.823 ---------------------- 00:17:12.823 Transport Type: 3 (TCP) 00:17:12.823 Address Family: 1 (IPv4) 00:17:12.823 Subsystem Type: 2 (NVM Subsystem) 00:17:12.823 Entry Flags: 00:17:12.823 Duplicate Returned Information: 0 00:17:12.823 Explicit Persistent Connection Support for Discovery: 0 00:17:12.823 Transport Requirements: 00:17:12.823 Secure Channel: Not Required 00:17:12.823 Port ID: 0 (0x0000) 00:17:12.823 Controller ID: 65535 (0xffff) 00:17:12.823 Admin Max SQ Size: 128 00:17:12.823 Transport Service Identifier: 4420 00:17:12.823 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:12.823 Transport Address: 10.0.0.2 [2024-07-15 15:40:07.785721] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:12.823 [2024-07-15 15:40:07.785734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3c840) on tqpair=0x1ef9a60 00:17:12.823 [2024-07-15 15:40:07.785741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.823 [2024-07-15 15:40:07.785747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3c9c0) on tqpair=0x1ef9a60 00:17:12.823 [2024-07-15 15:40:07.785751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.823 [2024-07-15 15:40:07.785756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3cb40) on tqpair=0x1ef9a60 00:17:12.823 [2024-07-15 15:40:07.785761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.823 [2024-07-15 15:40:07.785766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.823 [2024-07-15 15:40:07.785770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.823 [2024-07-15 15:40:07.785780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.785785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.785788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.823 [2024-07-15 15:40:07.785812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.823 [2024-07-15 15:40:07.785853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.823 [2024-07-15 15:40:07.785903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.823 [2024-07-15 15:40:07.785911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.823 [2024-07-15 15:40:07.785915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.785919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.823 [2024-07-15 15:40:07.785927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.785932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.785935] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.823 [2024-07-15 
15:40:07.785943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.823 [2024-07-15 15:40:07.785966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.823 [2024-07-15 15:40:07.786034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.823 [2024-07-15 15:40:07.786040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.823 [2024-07-15 15:40:07.786044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.786048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.823 [2024-07-15 15:40:07.786054] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:12.823 [2024-07-15 15:40:07.786059] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:12.823 [2024-07-15 15:40:07.786069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.786073] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.786077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.823 [2024-07-15 15:40:07.786085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.823 [2024-07-15 15:40:07.786102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.823 [2024-07-15 15:40:07.786153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.823 [2024-07-15 15:40:07.786160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.823 [2024-07-15 15:40:07.786163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.786168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.823 [2024-07-15 15:40:07.786179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.786183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.786187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.823 [2024-07-15 15:40:07.786195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.823 [2024-07-15 15:40:07.786212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.823 [2024-07-15 15:40:07.786258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.823 [2024-07-15 15:40:07.786265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.823 [2024-07-15 15:40:07.786268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.786273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.823 [2024-07-15 15:40:07.786283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.786288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.786292] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.823 [2024-07-15 15:40:07.786299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.823 [2024-07-15 15:40:07.786316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.823 [2024-07-15 15:40:07.786368] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.823 [2024-07-15 15:40:07.786375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.823 [2024-07-15 15:40:07.786378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.786383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.823 [2024-07-15 15:40:07.786393] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.786398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.823 [2024-07-15 15:40:07.786401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.823 [2024-07-15 15:40:07.786409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.823 [2024-07-15 15:40:07.786425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.823 [2024-07-15 15:40:07.786473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.824 [2024-07-15 15:40:07.786479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.824 [2024-07-15 15:40:07.786483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.824 [2024-07-15 15:40:07.786497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786506] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.824 [2024-07-15 15:40:07.786513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.824 [2024-07-15 15:40:07.786530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.824 [2024-07-15 15:40:07.786598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.824 [2024-07-15 15:40:07.786608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.824 [2024-07-15 15:40:07.786612] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.824 [2024-07-15 15:40:07.786627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.824 [2024-07-15 15:40:07.786643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.824 [2024-07-15 15:40:07.786664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.824 [2024-07-15 15:40:07.786740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.824 [2024-07-15 15:40:07.786748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.824 [2024-07-15 15:40:07.786752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786757] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.824 [2024-07-15 15:40:07.786767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.824 [2024-07-15 15:40:07.786784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.824 [2024-07-15 15:40:07.786803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.824 [2024-07-15 15:40:07.786858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.824 [2024-07-15 15:40:07.786865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.824 [2024-07-15 15:40:07.786869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.824 [2024-07-15 15:40:07.786884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786893] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.824 [2024-07-15 15:40:07.786901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.824 [2024-07-15 15:40:07.786918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.824 [2024-07-15 15:40:07.786970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.824 [2024-07-15 15:40:07.786976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.824 [2024-07-15 15:40:07.786980] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.786984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.824 [2024-07-15 15:40:07.786995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.824 [2024-07-15 15:40:07.787011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.824 [2024-07-15 15:40:07.787028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.824 
[2024-07-15 15:40:07.787093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.824 [2024-07-15 15:40:07.787100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.824 [2024-07-15 15:40:07.787103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.824 [2024-07-15 15:40:07.787118] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.824 [2024-07-15 15:40:07.787133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.824 [2024-07-15 15:40:07.787150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.824 [2024-07-15 15:40:07.787200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.824 [2024-07-15 15:40:07.787207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.824 [2024-07-15 15:40:07.787211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.824 [2024-07-15 15:40:07.787225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787234] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.824 [2024-07-15 15:40:07.787241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.824 [2024-07-15 15:40:07.787258] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.824 [2024-07-15 15:40:07.787307] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.824 [2024-07-15 15:40:07.787313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.824 [2024-07-15 15:40:07.787317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.824 [2024-07-15 15:40:07.787332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.824 [2024-07-15 15:40:07.787348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.824 [2024-07-15 15:40:07.787364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.824 [2024-07-15 15:40:07.787415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.824 [2024-07-15 15:40:07.787422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:17:12.824 [2024-07-15 15:40:07.787425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.824 [2024-07-15 15:40:07.787440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.824 [2024-07-15 15:40:07.787455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.824 [2024-07-15 15:40:07.787472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.824 [2024-07-15 15:40:07.787523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.824 [2024-07-15 15:40:07.787530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.824 [2024-07-15 15:40:07.787534] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.787538] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.824 [2024-07-15 15:40:07.787548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.791579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:12.824 [2024-07-15 15:40:07.791587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef9a60) 00:17:12.824 [2024-07-15 15:40:07.791596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.824 [2024-07-15 15:40:07.791624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3ccc0, cid 3, qid 0 00:17:12.824 [2024-07-15 15:40:07.791684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:12.824 [2024-07-15 15:40:07.791692] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:12.825 [2024-07-15 15:40:07.791696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:12.825 [2024-07-15 15:40:07.791700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3ccc0) on tqpair=0x1ef9a60 00:17:12.825 [2024-07-15 15:40:07.791709] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:17:12.825 00:17:12.825 15:40:07 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:12.825 [2024-07-15 15:40:07.824236] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
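The discovery log page printed above advertises two entries at traddr 10.0.0.2, trsvcid 4420: the discovery subsystem itself (nqn.2014-08.org.nvmexpress.discovery) and the NVM subsystem nqn.2016-06.io.spdk:cnode1. The same fields can be fed to the kernel initiator as a quick cross-check of what the SPDK target is exporting; a minimal sketch, assuming nvme-cli is installed on the test host and the nvme-tcp kernel module is loaded (not something this job does itself):

  # Query the SPDK discovery service over TCP (same traddr/trsvcid as in the log entries above)
  sudo nvme discover -t tcp -a 10.0.0.2 -s 4420
  # Connect to the advertised NVM subsystem and list the resulting controller
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme list
  # Tear the connection down again so it does not interfere with the rest of the test
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1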
00:17:12.825 [2024-07-15 15:40:07.824289] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86424 ] 00:17:13.089 [2024-07-15 15:40:07.966571] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:13.089 [2024-07-15 15:40:07.966627] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:13.089 [2024-07-15 15:40:07.966635] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:13.089 [2024-07-15 15:40:07.966646] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:13.089 [2024-07-15 15:40:07.966653] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:13.089 [2024-07-15 15:40:07.966798] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:13.089 [2024-07-15 15:40:07.966847] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f38a60 0 00:17:13.089 [2024-07-15 15:40:07.982559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:13.089 [2024-07-15 15:40:07.982582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:13.089 [2024-07-15 15:40:07.982588] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:13.089 [2024-07-15 15:40:07.982592] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:13.089 [2024-07-15 15:40:07.982634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.982641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.982646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38a60) 00:17:13.089 [2024-07-15 15:40:07.982658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:13.089 [2024-07-15 15:40:07.982688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b840, cid 0, qid 0 00:17:13.089 [2024-07-15 15:40:07.990574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.089 [2024-07-15 15:40:07.990597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.089 [2024-07-15 15:40:07.990603] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.990608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b840) on tqpair=0x1f38a60 00:17:13.089 [2024-07-15 15:40:07.990620] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:13.089 [2024-07-15 15:40:07.990628] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:13.089 [2024-07-15 15:40:07.990636] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:13.089 [2024-07-15 15:40:07.990670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.990676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.990680] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38a60) 00:17:13.089 [2024-07-15 15:40:07.990689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.089 [2024-07-15 15:40:07.990746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b840, cid 0, qid 0 00:17:13.089 [2024-07-15 15:40:07.990820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.089 [2024-07-15 15:40:07.990828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.089 [2024-07-15 15:40:07.990832] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.990837] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b840) on tqpair=0x1f38a60 00:17:13.089 [2024-07-15 15:40:07.990843] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:13.089 [2024-07-15 15:40:07.990852] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:13.089 [2024-07-15 15:40:07.990860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.990865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.990869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38a60) 00:17:13.089 [2024-07-15 15:40:07.990877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.089 [2024-07-15 15:40:07.990899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b840, cid 0, qid 0 00:17:13.089 [2024-07-15 15:40:07.990959] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.089 [2024-07-15 15:40:07.990966] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.089 [2024-07-15 15:40:07.990970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.990975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b840) on tqpair=0x1f38a60 00:17:13.089 [2024-07-15 15:40:07.990981] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:13.089 [2024-07-15 15:40:07.990990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:13.089 [2024-07-15 15:40:07.990998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38a60) 00:17:13.089 [2024-07-15 15:40:07.991015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.089 [2024-07-15 15:40:07.991049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b840, cid 0, qid 0 00:17:13.089 [2024-07-15 15:40:07.991122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.089 [2024-07-15 15:40:07.991129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.089 [2024-07-15 15:40:07.991133] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b840) on tqpair=0x1f38a60 00:17:13.089 [2024-07-15 15:40:07.991143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:13.089 [2024-07-15 15:40:07.991154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38a60) 00:17:13.089 [2024-07-15 15:40:07.991170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.089 [2024-07-15 15:40:07.991188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b840, cid 0, qid 0 00:17:13.089 [2024-07-15 15:40:07.991238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.089 [2024-07-15 15:40:07.991245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.089 [2024-07-15 15:40:07.991249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b840) on tqpair=0x1f38a60 00:17:13.089 [2024-07-15 15:40:07.991259] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:13.089 [2024-07-15 15:40:07.991264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:13.089 [2024-07-15 15:40:07.991272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:13.089 [2024-07-15 15:40:07.991378] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:13.089 [2024-07-15 15:40:07.991383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:13.089 [2024-07-15 15:40:07.991392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38a60) 00:17:13.089 [2024-07-15 15:40:07.991409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.089 [2024-07-15 15:40:07.991429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b840, cid 0, qid 0 00:17:13.089 [2024-07-15 15:40:07.991484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.089 [2024-07-15 15:40:07.991491] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.089 [2024-07-15 15:40:07.991495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b840) on tqpair=0x1f38a60 00:17:13.089 [2024-07-15 15:40:07.991505] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:13.089 [2024-07-15 15:40:07.991516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38a60) 00:17:13.089 [2024-07-15 15:40:07.991533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.089 [2024-07-15 15:40:07.991558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b840, cid 0, qid 0 00:17:13.089 [2024-07-15 15:40:07.991623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.089 [2024-07-15 15:40:07.991632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.089 [2024-07-15 15:40:07.991637] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b840) on tqpair=0x1f38a60 00:17:13.089 [2024-07-15 15:40:07.991646] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:13.089 [2024-07-15 15:40:07.991652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:13.089 [2024-07-15 15:40:07.991661] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:13.089 [2024-07-15 15:40:07.991672] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:13.089 [2024-07-15 15:40:07.991683] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38a60) 00:17:13.089 [2024-07-15 15:40:07.991696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.089 [2024-07-15 15:40:07.991718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b840, cid 0, qid 0 00:17:13.089 [2024-07-15 15:40:07.991814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:13.089 [2024-07-15 15:40:07.991821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:13.089 [2024-07-15 15:40:07.991825] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:13.089 [2024-07-15 15:40:07.991829] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38a60): datao=0, datal=4096, cccid=0 00:17:13.089 [2024-07-15 15:40:07.991835] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7b840) on tqpair(0x1f38a60): expected_datao=0, payload_size=4096 00:17:13.089 [2024-07-15 15:40:07.991840] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.991848] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.991852] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 
15:40:07.991861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.090 [2024-07-15 15:40:07.991868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.090 [2024-07-15 15:40:07.991872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.991876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b840) on tqpair=0x1f38a60 00:17:13.090 [2024-07-15 15:40:07.991885] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:13.090 [2024-07-15 15:40:07.991890] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:13.090 [2024-07-15 15:40:07.991896] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:13.090 [2024-07-15 15:40:07.991900] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:13.090 [2024-07-15 15:40:07.991906] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:13.090 [2024-07-15 15:40:07.991911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.991921] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.991928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.991933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.991937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38a60) 00:17:13.090 [2024-07-15 15:40:07.991946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:13.090 [2024-07-15 15:40:07.991967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b840, cid 0, qid 0 00:17:13.090 [2024-07-15 15:40:07.992026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.090 [2024-07-15 15:40:07.992033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.090 [2024-07-15 15:40:07.992037] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b840) on tqpair=0x1f38a60 00:17:13.090 [2024-07-15 15:40:07.992050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38a60) 00:17:13.090 [2024-07-15 15:40:07.992066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.090 [2024-07-15 15:40:07.992073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992077] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f38a60) 00:17:13.090 
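The nvme_ctrlr.c "setting state to ..." messages in this stretch trace the controller initialization state machine for nqn.2016-06.io.spdk:cnode1: connect adminq, read vs/cap, check and set CC.EN, wait for CSTS.RDY = 1, identify controller, configure AER, set keep alive timeout. Pulling just those transitions out of the console makes the sequence much easier to follow than reading the interleaved TCP debug lines; a minimal sketch, assuming the console output has been saved to a file named build.log (hypothetical name):

  # List the initialization state transitions in order, dropping the surrounding TCP debug noise
  grep -o "setting state to [a-zA-Z0-9 ._]*" build.log | uniq
  # Count how many fabrics property-get capsules were issued while the controller was brought up
  grep -o "FABRIC PROPERTY GET" build.log | wc -l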
[2024-07-15 15:40:07.992087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.090 [2024-07-15 15:40:07.992094] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f38a60) 00:17:13.090 [2024-07-15 15:40:07.992108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.090 [2024-07-15 15:40:07.992114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992119] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.090 [2024-07-15 15:40:07.992129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.090 [2024-07-15 15:40:07.992134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.992148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.992156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992160] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38a60) 00:17:13.090 [2024-07-15 15:40:07.992183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.090 [2024-07-15 15:40:07.992206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b840, cid 0, qid 0 00:17:13.090 [2024-07-15 15:40:07.992214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b9c0, cid 1, qid 0 00:17:13.090 [2024-07-15 15:40:07.992219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bb40, cid 2, qid 0 00:17:13.090 [2024-07-15 15:40:07.992225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.090 [2024-07-15 15:40:07.992230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be40, cid 4, qid 0 00:17:13.090 [2024-07-15 15:40:07.992323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.090 [2024-07-15 15:40:07.992330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.090 [2024-07-15 15:40:07.992334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be40) on tqpair=0x1f38a60 00:17:13.090 [2024-07-15 15:40:07.992345] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:13.090 [2024-07-15 15:40:07.992354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.992364] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.992371] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.992379] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992402] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38a60) 00:17:13.090 [2024-07-15 15:40:07.992409] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:13.090 [2024-07-15 15:40:07.992430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be40, cid 4, qid 0 00:17:13.090 [2024-07-15 15:40:07.992483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.090 [2024-07-15 15:40:07.992490] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.090 [2024-07-15 15:40:07.992495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be40) on tqpair=0x1f38a60 00:17:13.090 [2024-07-15 15:40:07.992579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.992610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.992620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38a60) 00:17:13.090 [2024-07-15 15:40:07.992633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.090 [2024-07-15 15:40:07.992660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be40, cid 4, qid 0 00:17:13.090 [2024-07-15 15:40:07.992738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:13.090 [2024-07-15 15:40:07.992746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:13.090 [2024-07-15 15:40:07.992750] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992754] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38a60): datao=0, datal=4096, cccid=4 00:17:13.090 [2024-07-15 15:40:07.992759] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7be40) on tqpair(0x1f38a60): expected_datao=0, payload_size=4096 00:17:13.090 [2024-07-15 15:40:07.992764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992772] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992776] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992785] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.090 [2024-07-15 15:40:07.992791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:13.090 [2024-07-15 15:40:07.992795] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be40) on tqpair=0x1f38a60 00:17:13.090 [2024-07-15 15:40:07.992817] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:13.090 [2024-07-15 15:40:07.992828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.992839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.992848] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38a60) 00:17:13.090 [2024-07-15 15:40:07.992860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.090 [2024-07-15 15:40:07.992882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be40, cid 4, qid 0 00:17:13.090 [2024-07-15 15:40:07.992977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:13.090 [2024-07-15 15:40:07.992984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:13.090 [2024-07-15 15:40:07.992988] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.992993] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38a60): datao=0, datal=4096, cccid=4 00:17:13.090 [2024-07-15 15:40:07.992998] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7be40) on tqpair(0x1f38a60): expected_datao=0, payload_size=4096 00:17:13.090 [2024-07-15 15:40:07.993003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.993010] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.993015] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.993024] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.090 [2024-07-15 15:40:07.993030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.090 [2024-07-15 15:40:07.993034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.090 [2024-07-15 15:40:07.993039] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be40) on tqpair=0x1f38a60 00:17:13.090 [2024-07-15 15:40:07.993055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.993067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:13.090 [2024-07-15 15:40:07.993076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38a60) 00:17:13.091 [2024-07-15 15:40:07.993089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.091 [2024-07-15 15:40:07.993111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be40, cid 4, qid 0 00:17:13.091 [2024-07-15 15:40:07.993178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:13.091 [2024-07-15 15:40:07.993186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:13.091 [2024-07-15 15:40:07.993190] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993194] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38a60): datao=0, datal=4096, cccid=4 00:17:13.091 [2024-07-15 15:40:07.993199] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7be40) on tqpair(0x1f38a60): expected_datao=0, payload_size=4096 00:17:13.091 [2024-07-15 15:40:07.993204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993212] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993216] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.091 [2024-07-15 15:40:07.993232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.091 [2024-07-15 15:40:07.993236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be40) on tqpair=0x1f38a60 00:17:13.091 [2024-07-15 15:40:07.993250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:13.091 [2024-07-15 15:40:07.993259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:13.091 [2024-07-15 15:40:07.993270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:13.091 [2024-07-15 15:40:07.993277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:13.091 [2024-07-15 15:40:07.993283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:13.091 [2024-07-15 15:40:07.993289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:13.091 [2024-07-15 15:40:07.993296] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:13.091 [2024-07-15 15:40:07.993301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:13.091 [2024-07-15 15:40:07.993307] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:13.091 [2024-07-15 15:40:07.993322] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38a60) 00:17:13.091 [2024-07-15 15:40:07.993336] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.091 [2024-07-15 15:40:07.993343] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f38a60) 00:17:13.091 [2024-07-15 15:40:07.993359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.091 [2024-07-15 15:40:07.993385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be40, cid 4, qid 0 00:17:13.091 [2024-07-15 15:40:07.993394] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bfc0, cid 5, qid 0 00:17:13.091 [2024-07-15 15:40:07.993481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.091 [2024-07-15 15:40:07.993488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.091 [2024-07-15 15:40:07.993492] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993497] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be40) on tqpair=0x1f38a60 00:17:13.091 [2024-07-15 15:40:07.993504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.091 [2024-07-15 15:40:07.993510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.091 [2024-07-15 15:40:07.993514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bfc0) on tqpair=0x1f38a60 00:17:13.091 [2024-07-15 15:40:07.993529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f38a60) 00:17:13.091 [2024-07-15 15:40:07.993556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.091 [2024-07-15 15:40:07.993591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bfc0, cid 5, qid 0 00:17:13.091 [2024-07-15 15:40:07.993651] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.091 [2024-07-15 15:40:07.993659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.091 [2024-07-15 15:40:07.993663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bfc0) on tqpair=0x1f38a60 00:17:13.091 [2024-07-15 15:40:07.993679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f38a60) 00:17:13.091 [2024-07-15 15:40:07.993692] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.091 [2024-07-15 15:40:07.993712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bfc0, cid 5, qid 0 00:17:13.091 [2024-07-15 15:40:07.993774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.091 
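Once the controller reports ready, the trace above shows the driver reading back features (ARBITRATION, POWER MANAGEMENT, TEMPERATURE THRESHOLD, NUMBER OF QUEUES, KEEP ALIVE TIMER) and fetching log pages before declaring the attach complete. Comparable reads can be issued by hand against a kernel-attached controller to sanity-check the target's answers; a minimal sketch, assuming the subsystem was connected as in the earlier nvme-cli example and shows up as /dev/nvme0 (illustrative device name):

  # Identify data for the attached controller (same structure the IDENTIFY command above retrieves)
  sudo nvme id-ctrl /dev/nvme0
  # Read back a couple of the same features: 0x01 arbitration, 0x07 number of queues
  sudo nvme get-feature /dev/nvme0 -f 0x01 -H
  sudo nvme get-feature /dev/nvme0 -f 0x07 -H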
[2024-07-15 15:40:07.993782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.091 [2024-07-15 15:40:07.993786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bfc0) on tqpair=0x1f38a60 00:17:13.091 [2024-07-15 15:40:07.993802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f38a60) 00:17:13.091 [2024-07-15 15:40:07.993814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.091 [2024-07-15 15:40:07.993834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bfc0, cid 5, qid 0 00:17:13.091 [2024-07-15 15:40:07.993899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.091 [2024-07-15 15:40:07.993906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.091 [2024-07-15 15:40:07.993910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bfc0) on tqpair=0x1f38a60 00:17:13.091 [2024-07-15 15:40:07.993934] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f38a60) 00:17:13.091 [2024-07-15 15:40:07.993949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.091 [2024-07-15 15:40:07.993957] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993961] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38a60) 00:17:13.091 [2024-07-15 15:40:07.993968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.091 [2024-07-15 15:40:07.993976] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.993980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f38a60) 00:17:13.091 [2024-07-15 15:40:07.993987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.091 [2024-07-15 15:40:07.993998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f38a60) 00:17:13.091 [2024-07-15 15:40:07.994009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.091 [2024-07-15 15:40:07.994032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bfc0, cid 5, qid 0 00:17:13.091 [2024-07-15 15:40:07.994040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7be40, cid 4, qid 0 00:17:13.091 [2024-07-15 15:40:07.994045] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7c140, cid 6, qid 0 00:17:13.091 [2024-07-15 15:40:07.994051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7c2c0, cid 7, qid 0 00:17:13.091 [2024-07-15 15:40:07.994192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:13.091 [2024-07-15 15:40:07.994200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:13.091 [2024-07-15 15:40:07.994204] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994208] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38a60): datao=0, datal=8192, cccid=5 00:17:13.091 [2024-07-15 15:40:07.994213] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7bfc0) on tqpair(0x1f38a60): expected_datao=0, payload_size=8192 00:17:13.091 [2024-07-15 15:40:07.994218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994235] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994240] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:13.091 [2024-07-15 15:40:07.994253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:13.091 [2024-07-15 15:40:07.994257] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994262] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38a60): datao=0, datal=512, cccid=4 00:17:13.091 [2024-07-15 15:40:07.994267] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7be40) on tqpair(0x1f38a60): expected_datao=0, payload_size=512 00:17:13.091 [2024-07-15 15:40:07.994272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994279] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994283] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:13.091 [2024-07-15 15:40:07.994295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:13.091 [2024-07-15 15:40:07.994299] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994303] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38a60): datao=0, datal=512, cccid=6 00:17:13.091 [2024-07-15 15:40:07.994308] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7c140) on tqpair(0x1f38a60): expected_datao=0, payload_size=512 00:17:13.091 [2024-07-15 15:40:07.994313] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994319] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994323] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:13.091 [2024-07-15 15:40:07.994329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:13.091 [2024-07-15 15:40:07.994336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:13.091 [2024-07-15 15:40:07.994339] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:13.092 [2024-07-15 15:40:07.994343] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1f38a60): datao=0, datal=4096, cccid=7 00:17:13.092 [2024-07-15 15:40:07.994348] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7c2c0) on tqpair(0x1f38a60): expected_datao=0, payload_size=4096 00:17:13.092 [2024-07-15 15:40:07.994353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.092 [2024-07-15 15:40:07.994360] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:13.092 [2024-07-15 15:40:07.994364] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:13.092 [2024-07-15 15:40:07.994373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.092 [2024-07-15 15:40:07.994379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.092 [2024-07-15 15:40:07.994383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.092 [2024-07-15 15:40:07.994388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bfc0) on tqpair=0x1f38a60 00:17:13.092 ===================================================== 00:17:13.092 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:13.092 ===================================================== 00:17:13.092 Controller Capabilities/Features 00:17:13.092 ================================ 00:17:13.092 Vendor ID: 8086 00:17:13.092 Subsystem Vendor ID: 8086 00:17:13.092 Serial Number: SPDK00000000000001 00:17:13.092 Model Number: SPDK bdev Controller 00:17:13.092 Firmware Version: 24.09 00:17:13.092 Recommended Arb Burst: 6 00:17:13.092 IEEE OUI Identifier: e4 d2 5c 00:17:13.092 Multi-path I/O 00:17:13.092 May have multiple subsystem ports: Yes 00:17:13.092 May have multiple controllers: Yes 00:17:13.092 Associated with SR-IOV VF: No 00:17:13.092 Max Data Transfer Size: 131072 00:17:13.092 Max Number of Namespaces: 32 00:17:13.092 Max Number of I/O Queues: 127 00:17:13.092 NVMe Specification Version (VS): 1.3 00:17:13.092 NVMe Specification Version (Identify): 1.3 00:17:13.092 Maximum Queue Entries: 128 00:17:13.092 Contiguous Queues Required: Yes 00:17:13.092 Arbitration Mechanisms Supported 00:17:13.092 Weighted Round Robin: Not Supported 00:17:13.092 Vendor Specific: Not Supported 00:17:13.092 Reset Timeout: 15000 ms 00:17:13.092 Doorbell Stride: 4 bytes 00:17:13.092 NVM Subsystem Reset: Not Supported 00:17:13.092 Command Sets Supported 00:17:13.092 NVM Command Set: Supported 00:17:13.092 Boot Partition: Not Supported 00:17:13.092 Memory Page Size Minimum: 4096 bytes 00:17:13.092 Memory Page Size Maximum: 4096 bytes 00:17:13.092 Persistent Memory Region: Not Supported 00:17:13.092 Optional Asynchronous Events Supported 00:17:13.092 Namespace Attribute Notices: Supported 00:17:13.092 Firmware Activation Notices: Not Supported 00:17:13.092 ANA Change Notices: Not Supported 00:17:13.092 PLE Aggregate Log Change Notices: Not Supported 00:17:13.092 LBA Status Info Alert Notices: Not Supported 00:17:13.092 EGE Aggregate Log Change Notices: Not Supported 00:17:13.092 Normal NVM Subsystem Shutdown event: Not Supported 00:17:13.092 Zone Descriptor Change Notices: Not Supported 00:17:13.092 Discovery Log Change Notices: Not Supported 00:17:13.092 Controller Attributes 00:17:13.092 128-bit Host Identifier: Supported 00:17:13.092 Non-Operational Permissive Mode: Not Supported 00:17:13.092 NVM Sets: Not Supported 00:17:13.092 Read Recovery Levels: Not Supported 00:17:13.092 Endurance Groups: Not Supported 00:17:13.092 Predictable Latency Mode: Not Supported 00:17:13.092 Traffic Based Keep ALive: Not 
Supported 00:17:13.092 Namespace Granularity: Not Supported 00:17:13.092 SQ Associations: Not Supported 00:17:13.092 UUID List: Not Supported 00:17:13.092 Multi-Domain Subsystem: Not Supported 00:17:13.092 Fixed Capacity Management: Not Supported 00:17:13.092 Variable Capacity Management: Not Supported 00:17:13.092 Delete Endurance Group: Not Supported 00:17:13.092 Delete NVM Set: Not Supported 00:17:13.092 Extended LBA Formats Supported: Not Supported 00:17:13.092 Flexible Data Placement Supported: Not Supported 00:17:13.092 00:17:13.092 Controller Memory Buffer Support 00:17:13.092 ================================ 00:17:13.092 Supported: No 00:17:13.092 00:17:13.092 Persistent Memory Region Support 00:17:13.092 ================================ 00:17:13.092 Supported: No 00:17:13.092 00:17:13.092 Admin Command Set Attributes 00:17:13.092 ============================ 00:17:13.092 Security Send/Receive: Not Supported 00:17:13.092 Format NVM: Not Supported 00:17:13.092 Firmware Activate/Download: Not Supported 00:17:13.092 Namespace Management: Not Supported 00:17:13.092 Device Self-Test: Not Supported 00:17:13.092 Directives: Not Supported 00:17:13.092 NVMe-MI: Not Supported 00:17:13.092 Virtualization Management: Not Supported 00:17:13.092 Doorbell Buffer Config: Not Supported 00:17:13.092 Get LBA Status Capability: Not Supported 00:17:13.092 Command & Feature Lockdown Capability: Not Supported 00:17:13.092 Abort Command Limit: 4 00:17:13.092 Async Event Request Limit: 4 00:17:13.092 Number of Firmware Slots: N/A 00:17:13.092 Firmware Slot 1 Read-Only: N/A 00:17:13.092 Firmware Activation Without Reset: [2024-07-15 15:40:07.994405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.092 [2024-07-15 15:40:07.994413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.092 [2024-07-15 15:40:07.994417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.092 [2024-07-15 15:40:07.994422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7be40) on tqpair=0x1f38a60 00:17:13.092 [2024-07-15 15:40:07.994434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.092 [2024-07-15 15:40:07.994441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.092 [2024-07-15 15:40:07.994445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.092 [2024-07-15 15:40:07.994449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7c140) on tqpair=0x1f38a60 00:17:13.092 [2024-07-15 15:40:07.994457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.092 [2024-07-15 15:40:07.994463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.092 [2024-07-15 15:40:07.994467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.092 [2024-07-15 15:40:07.994472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7c2c0) on tqpair=0x1f38a60 00:17:13.092 N/A 00:17:13.092 Multiple Update Detection Support: N/A 00:17:13.092 Firmware Update Granularity: No Information Provided 00:17:13.092 Per-Namespace SMART Log: No 00:17:13.092 Asymmetric Namespace Access Log Page: Not Supported 00:17:13.092 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:13.092 Command Effects Log Page: Supported 00:17:13.092 Get Log Page Extended Data: Supported 00:17:13.092 Telemetry Log Pages: Not Supported 00:17:13.092 Persistent Event Log Pages: Not Supported 00:17:13.092 Supported Log Pages Log Page: May Support 
00:17:13.092 Commands Supported & Effects Log Page: Not Supported 00:17:13.092 Feature Identifiers & Effects Log Page:May Support 00:17:13.092 NVMe-MI Commands & Effects Log Page: May Support 00:17:13.092 Data Area 4 for Telemetry Log: Not Supported 00:17:13.092 Error Log Page Entries Supported: 128 00:17:13.092 Keep Alive: Supported 00:17:13.092 Keep Alive Granularity: 10000 ms 00:17:13.092 00:17:13.092 NVM Command Set Attributes 00:17:13.092 ========================== 00:17:13.092 Submission Queue Entry Size 00:17:13.092 Max: 64 00:17:13.092 Min: 64 00:17:13.092 Completion Queue Entry Size 00:17:13.092 Max: 16 00:17:13.092 Min: 16 00:17:13.092 Number of Namespaces: 32 00:17:13.092 Compare Command: Supported 00:17:13.092 Write Uncorrectable Command: Not Supported 00:17:13.092 Dataset Management Command: Supported 00:17:13.092 Write Zeroes Command: Supported 00:17:13.092 Set Features Save Field: Not Supported 00:17:13.092 Reservations: Supported 00:17:13.092 Timestamp: Not Supported 00:17:13.092 Copy: Supported 00:17:13.092 Volatile Write Cache: Present 00:17:13.092 Atomic Write Unit (Normal): 1 00:17:13.092 Atomic Write Unit (PFail): 1 00:17:13.092 Atomic Compare & Write Unit: 1 00:17:13.092 Fused Compare & Write: Supported 00:17:13.092 Scatter-Gather List 00:17:13.092 SGL Command Set: Supported 00:17:13.092 SGL Keyed: Supported 00:17:13.092 SGL Bit Bucket Descriptor: Not Supported 00:17:13.092 SGL Metadata Pointer: Not Supported 00:17:13.092 Oversized SGL: Not Supported 00:17:13.092 SGL Metadata Address: Not Supported 00:17:13.092 SGL Offset: Supported 00:17:13.092 Transport SGL Data Block: Not Supported 00:17:13.092 Replay Protected Memory Block: Not Supported 00:17:13.092 00:17:13.092 Firmware Slot Information 00:17:13.092 ========================= 00:17:13.092 Active slot: 1 00:17:13.092 Slot 1 Firmware Revision: 24.09 00:17:13.092 00:17:13.092 00:17:13.092 Commands Supported and Effects 00:17:13.092 ============================== 00:17:13.092 Admin Commands 00:17:13.092 -------------- 00:17:13.092 Get Log Page (02h): Supported 00:17:13.092 Identify (06h): Supported 00:17:13.092 Abort (08h): Supported 00:17:13.092 Set Features (09h): Supported 00:17:13.092 Get Features (0Ah): Supported 00:17:13.092 Asynchronous Event Request (0Ch): Supported 00:17:13.092 Keep Alive (18h): Supported 00:17:13.092 I/O Commands 00:17:13.092 ------------ 00:17:13.092 Flush (00h): Supported LBA-Change 00:17:13.092 Write (01h): Supported LBA-Change 00:17:13.092 Read (02h): Supported 00:17:13.092 Compare (05h): Supported 00:17:13.092 Write Zeroes (08h): Supported LBA-Change 00:17:13.092 Dataset Management (09h): Supported LBA-Change 00:17:13.092 Copy (19h): Supported LBA-Change 00:17:13.093 00:17:13.093 Error Log 00:17:13.093 ========= 00:17:13.093 00:17:13.093 Arbitration 00:17:13.093 =========== 00:17:13.093 Arbitration Burst: 1 00:17:13.093 00:17:13.093 Power Management 00:17:13.093 ================ 00:17:13.093 Number of Power States: 1 00:17:13.093 Current Power State: Power State #0 00:17:13.093 Power State #0: 00:17:13.093 Max Power: 0.00 W 00:17:13.093 Non-Operational State: Operational 00:17:13.093 Entry Latency: Not Reported 00:17:13.093 Exit Latency: Not Reported 00:17:13.093 Relative Read Throughput: 0 00:17:13.093 Relative Read Latency: 0 00:17:13.093 Relative Write Throughput: 0 00:17:13.093 Relative Write Latency: 0 00:17:13.093 Idle Power: Not Reported 00:17:13.093 Active Power: Not Reported 00:17:13.093 Non-Operational Permissive Mode: Not Supported 00:17:13.093 00:17:13.093 Health 
Information 00:17:13.093 ================== 00:17:13.093 Critical Warnings: 00:17:13.093 Available Spare Space: OK 00:17:13.093 Temperature: OK 00:17:13.093 Device Reliability: OK 00:17:13.093 Read Only: No 00:17:13.093 Volatile Memory Backup: OK 00:17:13.093 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:13.093 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:13.093 Available Spare: 0% 00:17:13.093 Available Spare Threshold: 0% 00:17:13.093 Life Percentage Used:[2024-07-15 15:40:07.998618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.998630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f38a60) 00:17:13.093 [2024-07-15 15:40:07.998639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.093 [2024-07-15 15:40:07.998670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7c2c0, cid 7, qid 0 00:17:13.093 [2024-07-15 15:40:07.998766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.093 [2024-07-15 15:40:07.998775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.093 [2024-07-15 15:40:07.998779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.998784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7c2c0) on tqpair=0x1f38a60 00:17:13.093 [2024-07-15 15:40:07.998824] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:13.093 [2024-07-15 15:40:07.998836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b840) on tqpair=0x1f38a60 00:17:13.093 [2024-07-15 15:40:07.998844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.093 [2024-07-15 15:40:07.998850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b9c0) on tqpair=0x1f38a60 00:17:13.093 [2024-07-15 15:40:07.998855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.093 [2024-07-15 15:40:07.998861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bb40) on tqpair=0x1f38a60 00:17:13.093 [2024-07-15 15:40:07.998867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.093 [2024-07-15 15:40:07.998872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.093 [2024-07-15 15:40:07.998877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.093 [2024-07-15 15:40:07.998887] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.998892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.998896] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.093 [2024-07-15 15:40:07.998905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.093 [2024-07-15 15:40:07.998931] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.093 [2024-07-15 
15:40:07.998987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.093 [2024-07-15 15:40:07.998995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.093 [2024-07-15 15:40:07.998999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.093 [2024-07-15 15:40:07.999012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999021] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.093 [2024-07-15 15:40:07.999029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.093 [2024-07-15 15:40:07.999053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.093 [2024-07-15 15:40:07.999140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.093 [2024-07-15 15:40:07.999147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.093 [2024-07-15 15:40:07.999151] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.093 [2024-07-15 15:40:07.999161] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:13.093 [2024-07-15 15:40:07.999166] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:13.093 [2024-07-15 15:40:07.999177] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.093 [2024-07-15 15:40:07.999193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.093 [2024-07-15 15:40:07.999212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.093 [2024-07-15 15:40:07.999268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.093 [2024-07-15 15:40:07.999292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.093 [2024-07-15 15:40:07.999297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999301] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.093 [2024-07-15 15:40:07.999313] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.093 [2024-07-15 15:40:07.999330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.093 [2024-07-15 15:40:07.999350] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.093 [2024-07-15 15:40:07.999402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.093 [2024-07-15 15:40:07.999409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.093 [2024-07-15 15:40:07.999413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.093 [2024-07-15 15:40:07.999428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.093 [2024-07-15 15:40:07.999444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.093 [2024-07-15 15:40:07.999463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.093 [2024-07-15 15:40:07.999515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.093 [2024-07-15 15:40:07.999537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.093 [2024-07-15 15:40:07.999543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.093 [2024-07-15 15:40:07.999559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.093 [2024-07-15 15:40:07.999568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.093 [2024-07-15 15:40:07.999576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.093 [2024-07-15 15:40:07.999598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.093 [2024-07-15 15:40:07.999653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 15:40:07.999660] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:07.999664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:07.999668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:07.999679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:07.999683] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:07.999687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.094 [2024-07-15 15:40:07.999695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.094 [2024-07-15 15:40:07.999714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.094 [2024-07-15 15:40:07.999769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 
15:40:07.999780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:07.999785] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:07.999789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:07.999800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:07.999805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:07.999809] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.094 [2024-07-15 15:40:07.999817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.094 [2024-07-15 15:40:07.999837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.094 [2024-07-15 15:40:07.999888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 15:40:07.999895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:07.999899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:07.999904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:07.999914] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:07.999919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:07.999923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.094 [2024-07-15 15:40:07.999931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.094 [2024-07-15 15:40:07.999949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.094 [2024-07-15 15:40:08.000001] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 15:40:08.000008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:08.000012] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000017] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:08.000027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000032] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000036] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.094 [2024-07-15 15:40:08.000043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.094 [2024-07-15 15:40:08.000062] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.094 [2024-07-15 15:40:08.000118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 15:40:08.000125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:08.000129] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 
[2024-07-15 15:40:08.000133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:08.000143] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.094 [2024-07-15 15:40:08.000160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.094 [2024-07-15 15:40:08.000178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.094 [2024-07-15 15:40:08.000230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 15:40:08.000237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:08.000241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:08.000256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.094 [2024-07-15 15:40:08.000272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.094 [2024-07-15 15:40:08.000290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.094 [2024-07-15 15:40:08.000343] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 15:40:08.000350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:08.000354] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:08.000370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.094 [2024-07-15 15:40:08.000386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.094 [2024-07-15 15:40:08.000405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.094 [2024-07-15 15:40:08.000457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 15:40:08.000463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:08.000467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:08.000482] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.094 [2024-07-15 15:40:08.000498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.094 [2024-07-15 15:40:08.000517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.094 [2024-07-15 15:40:08.000581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 15:40:08.000589] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:08.000593] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000597] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:08.000608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.094 [2024-07-15 15:40:08.000625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.094 [2024-07-15 15:40:08.000646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.094 [2024-07-15 15:40:08.000698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 15:40:08.000705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:08.000709] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000713] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:08.000724] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.094 [2024-07-15 15:40:08.000740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.094 [2024-07-15 15:40:08.000759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.094 [2024-07-15 15:40:08.000815] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 15:40:08.000827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:08.000831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:08.000847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000856] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.094 [2024-07-15 15:40:08.000864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.094 [2024-07-15 15:40:08.000883] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.094 [2024-07-15 15:40:08.000935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 15:40:08.000942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:08.000946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:08.000961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.000970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.094 [2024-07-15 15:40:08.000978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.094 [2024-07-15 15:40:08.000996] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.094 [2024-07-15 15:40:08.001051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.094 [2024-07-15 15:40:08.001058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.094 [2024-07-15 15:40:08.001062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.001066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.094 [2024-07-15 15:40:08.001077] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.001081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.094 [2024-07-15 15:40:08.001086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.001093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.001112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 15:40:08.001172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.001180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 [2024-07-15 15:40:08.001184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.095 [2024-07-15 15:40:08.001199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.001216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.001235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 15:40:08.001283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.001295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 [2024-07-15 15:40:08.001299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.095 [2024-07-15 15:40:08.001315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.001331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.001351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 15:40:08.001404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.001411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 [2024-07-15 15:40:08.001415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.095 [2024-07-15 15:40:08.001429] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.001446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.001465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 15:40:08.001519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.001538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 [2024-07-15 15:40:08.001542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.095 [2024-07-15 15:40:08.001558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.001575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.001596] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 
15:40:08.001654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.001661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 [2024-07-15 15:40:08.001665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.095 [2024-07-15 15:40:08.001680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.001696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.001715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 15:40:08.001768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.001775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 [2024-07-15 15:40:08.001780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.095 [2024-07-15 15:40:08.001795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001803] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.001811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.001830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 15:40:08.001882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.001889] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 [2024-07-15 15:40:08.001893] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.095 [2024-07-15 15:40:08.001908] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.001917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.001924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.001943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 15:40:08.001996] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.002002] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 
[2024-07-15 15:40:08.002006] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002011] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.095 [2024-07-15 15:40:08.002021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002026] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.002054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.002073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 15:40:08.002127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.002134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 [2024-07-15 15:40:08.002138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.095 [2024-07-15 15:40:08.002154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002163] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.002170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.002190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 15:40:08.002242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.002249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 [2024-07-15 15:40:08.002253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.095 [2024-07-15 15:40:08.002268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.002285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.002305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 15:40:08.002359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.002380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 [2024-07-15 15:40:08.002384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.095 [2024-07-15 15:40:08.002399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.002415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.002435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 15:40:08.002487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.002498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 [2024-07-15 15:40:08.002503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.002507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.095 [2024-07-15 15:40:08.002519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.006543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:13.095 [2024-07-15 15:40:08.006551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38a60) 00:17:13.095 [2024-07-15 15:40:08.006561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.095 [2024-07-15 15:40:08.006590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7bcc0, cid 3, qid 0 00:17:13.095 [2024-07-15 15:40:08.006654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:13.095 [2024-07-15 15:40:08.006662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:13.095 [2024-07-15 15:40:08.006666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:13.096 [2024-07-15 15:40:08.006671] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7bcc0) on tqpair=0x1f38a60 00:17:13.096 [2024-07-15 15:40:08.006680] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:17:13.096 0% 00:17:13.096 Data Units Read: 0 00:17:13.096 Data Units Written: 0 00:17:13.096 Host Read Commands: 0 00:17:13.096 Host Write Commands: 0 00:17:13.096 Controller Busy Time: 0 minutes 00:17:13.096 Power Cycles: 0 00:17:13.096 Power On Hours: 0 hours 00:17:13.096 Unsafe Shutdowns: 0 00:17:13.096 Unrecoverable Media Errors: 0 00:17:13.096 Lifetime Error Log Entries: 0 00:17:13.096 Warning Temperature Time: 0 minutes 00:17:13.096 Critical Temperature Time: 0 minutes 00:17:13.096 00:17:13.096 Number of Queues 00:17:13.096 ================ 00:17:13.096 Number of I/O Submission Queues: 127 00:17:13.096 Number of I/O Completion Queues: 127 00:17:13.096 00:17:13.096 Active Namespaces 00:17:13.096 ================= 00:17:13.096 Namespace ID:1 00:17:13.096 Error Recovery Timeout: Unlimited 00:17:13.096 Command Set Identifier: NVM (00h) 00:17:13.096 Deallocate: Supported 00:17:13.096 Deallocated/Unwritten Error: Not Supported 00:17:13.096 Deallocated Read Value: Unknown 00:17:13.096 Deallocate in Write Zeroes: Not Supported 00:17:13.096 Deallocated Guard Field: 0xFFFF 00:17:13.096 Flush: Supported 
00:17:13.096 Reservation: Supported 00:17:13.096 Namespace Sharing Capabilities: Multiple Controllers 00:17:13.096 Size (in LBAs): 131072 (0GiB) 00:17:13.096 Capacity (in LBAs): 131072 (0GiB) 00:17:13.096 Utilization (in LBAs): 131072 (0GiB) 00:17:13.096 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:13.096 EUI64: ABCDEF0123456789 00:17:13.096 UUID: 2975d394-d5b5-4f39-84d6-1ccd0061b447 00:17:13.096 Thin Provisioning: Not Supported 00:17:13.096 Per-NS Atomic Units: Yes 00:17:13.096 Atomic Boundary Size (Normal): 0 00:17:13.096 Atomic Boundary Size (PFail): 0 00:17:13.096 Atomic Boundary Offset: 0 00:17:13.096 Maximum Single Source Range Length: 65535 00:17:13.096 Maximum Copy Length: 65535 00:17:13.096 Maximum Source Range Count: 1 00:17:13.096 NGUID/EUI64 Never Reused: No 00:17:13.096 Namespace Write Protected: No 00:17:13.096 Number of LBA Formats: 1 00:17:13.096 Current LBA Format: LBA Format #00 00:17:13.096 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:13.096 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.096 rmmod nvme_tcp 00:17:13.096 rmmod nvme_fabrics 00:17:13.096 rmmod nvme_keyring 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86369 ']' 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86369 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86369 ']' 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86369 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86369 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86369' 
00:17:13.096 killing process with pid 86369 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 86369 00:17:13.096 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86369 00:17:13.389 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.389 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.389 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.389 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.389 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.389 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.389 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.389 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.389 15:40:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:13.389 00:17:13.389 real 0m2.511s 00:17:13.389 user 0m7.215s 00:17:13.389 sys 0m0.596s 00:17:13.389 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.389 ************************************ 00:17:13.389 15:40:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:13.389 END TEST nvmf_identify 00:17:13.389 ************************************ 00:17:13.389 15:40:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:13.389 15:40:08 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:13.389 15:40:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:13.389 15:40:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.389 15:40:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:13.389 ************************************ 00:17:13.389 START TEST nvmf_perf 00:17:13.389 ************************************ 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:13.389 * Looking for test storage... 
00:17:13.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.389 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.390 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:13.649 Cannot find device "nvmf_tgt_br" 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.649 Cannot find device "nvmf_tgt_br2" 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:13.649 Cannot find device "nvmf_tgt_br" 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:13.649 Cannot find device "nvmf_tgt_br2" 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:13.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:13.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:13.649 
15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:13.649 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:13.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:17:13.908 00:17:13.908 --- 10.0.0.2 ping statistics --- 00:17:13.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.908 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:13.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:13.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:13.908 00:17:13.908 --- 10.0.0.3 ping statistics --- 00:17:13.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.908 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:13.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:13.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:13.908 00:17:13.908 --- 10.0.0.1 ping statistics --- 00:17:13.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.908 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:13.908 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86589 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86589 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 86589 ']' 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.909 15:40:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:13.909 [2024-07-15 15:40:08.936408] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:13.909 [2024-07-15 15:40:08.936503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.168 [2024-07-15 15:40:09.069036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.168 [2024-07-15 15:40:09.127983] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.168 [2024-07-15 15:40:09.128037] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
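For reference, the veth topology that nvmf_veth_init assembles in the trace above can be reproduced by hand with roughly the commands below. This is a minimal sketch distilled from the logged ip/iptables calls, assuming root privileges and that the interface, bridge and namespace names used by the harness (nvmf_init_if, nvmf_tgt_if, nvmf_tgt_if2, nvmf_br, nvmf_tgt_ns_spdk) are free on the host.

  # Target-side network namespace and three veth pairs, as in the trace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side (10.0.0.1)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target port (10.0.0.2)
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target port (10.0.0.3)
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addressing: initiator on the host, target ports inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # Bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Allow NVMe/TCP traffic (port 4420) in and forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # same sanity check the harness performs

The three pings recorded in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm both directions of this topology before the target application is started.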
00:17:14.168 [2024-07-15 15:40:09.128071] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.168 [2024-07-15 15:40:09.128082] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.168 [2024-07-15 15:40:09.128092] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.168 [2024-07-15 15:40:09.128855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.168 [2024-07-15 15:40:09.128973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.168 [2024-07-15 15:40:09.129074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.168 [2024-07-15 15:40:09.129121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.168 15:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.168 15:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:17:14.168 15:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:14.168 15:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:14.168 15:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:14.168 15:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.168 15:40:09 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:14.168 15:40:09 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:14.735 15:40:09 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:14.735 15:40:09 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:14.993 15:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:14.993 15:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:15.251 15:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:15.251 15:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:15.251 15:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:15.251 15:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:15.251 15:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:15.510 [2024-07-15 15:40:10.524180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.510 15:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:15.769 15:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:15.769 15:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:16.028 15:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:16.028 15:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:16.287 15:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.546 [2024-07-15 15:40:11.545413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.546 15:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:16.805 15:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:16.805 15:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:16.805 15:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:16.805 15:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:17.742 Initializing NVMe Controllers 00:17:17.742 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:17.742 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:17.742 Initialization complete. Launching workers. 00:17:17.742 ======================================================== 00:17:17.742 Latency(us) 00:17:17.742 Device Information : IOPS MiB/s Average min max 00:17:17.743 PCIE (0000:00:10.0) NSID 1 from core 0: 23267.67 90.89 1375.35 389.34 7574.88 00:17:17.743 ======================================================== 00:17:17.743 Total : 23267.67 90.89 1375.35 389.34 7574.88 00:17:17.743 00:17:18.002 15:40:12 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:19.389 Initializing NVMe Controllers 00:17:19.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:19.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:19.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:19.389 Initialization complete. Launching workers. 00:17:19.389 ======================================================== 00:17:19.389 Latency(us) 00:17:19.389 Device Information : IOPS MiB/s Average min max 00:17:19.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3682.95 14.39 271.19 104.12 7074.54 00:17:19.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8195.04 6912.76 12041.34 00:17:19.389 ======================================================== 00:17:19.389 Total : 3805.95 14.87 527.27 104.12 12041.34 00:17:19.389 00:17:19.389 15:40:14 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:20.762 Initializing NVMe Controllers 00:17:20.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:20.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:20.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:20.762 Initialization complete. Launching workers. 
00:17:20.762 ======================================================== 00:17:20.762 Latency(us) 00:17:20.762 Device Information : IOPS MiB/s Average min max 00:17:20.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9404.99 36.74 3403.79 556.56 7861.76 00:17:20.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2702.00 10.55 11942.84 7256.36 22899.73 00:17:20.762 ======================================================== 00:17:20.762 Total : 12106.99 47.29 5309.51 556.56 22899.73 00:17:20.762 00:17:20.762 15:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:20.762 15:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:23.294 Initializing NVMe Controllers 00:17:23.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:23.294 Controller IO queue size 128, less than required. 00:17:23.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:23.294 Controller IO queue size 128, less than required. 00:17:23.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:23.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:23.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:23.294 Initialization complete. Launching workers. 00:17:23.294 ======================================================== 00:17:23.294 Latency(us) 00:17:23.294 Device Information : IOPS MiB/s Average min max 00:17:23.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1854.46 463.61 69837.20 46020.60 126764.62 00:17:23.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 383.99 96.00 355565.48 103726.49 729227.33 00:17:23.294 ======================================================== 00:17:23.294 Total : 2238.45 559.61 118852.01 46020.60 729227.33 00:17:23.294 00:17:23.294 15:40:18 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:23.294 Initializing NVMe Controllers 00:17:23.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:23.294 Controller IO queue size 128, less than required. 00:17:23.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:23.294 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:23.294 Controller IO queue size 128, less than required. 00:17:23.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:23.294 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:17:23.294 WARNING: Some requested NVMe devices were skipped 00:17:23.294 No valid NVMe controllers or AIO or URING devices found 00:17:23.294 15:40:18 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:25.830 Initializing NVMe Controllers 00:17:25.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:25.830 Controller IO queue size 128, less than required. 00:17:25.830 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:25.830 Controller IO queue size 128, less than required. 00:17:25.830 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:25.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:25.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:25.830 Initialization complete. Launching workers. 00:17:25.830 00:17:25.830 ==================== 00:17:25.830 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:25.830 TCP transport: 00:17:25.830 polls: 9044 00:17:25.830 idle_polls: 4767 00:17:25.830 sock_completions: 4277 00:17:25.830 nvme_completions: 4901 00:17:25.830 submitted_requests: 7400 00:17:25.830 queued_requests: 1 00:17:25.830 00:17:25.830 ==================== 00:17:25.830 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:25.830 TCP transport: 00:17:25.830 polls: 9146 00:17:25.830 idle_polls: 5513 00:17:25.830 sock_completions: 3633 00:17:25.830 nvme_completions: 6863 00:17:25.830 submitted_requests: 10246 00:17:25.830 queued_requests: 1 00:17:25.830 ======================================================== 00:17:25.830 Latency(us) 00:17:25.830 Device Information : IOPS MiB/s Average min max 00:17:25.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1224.98 306.24 106755.88 65303.67 163396.00 00:17:25.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1715.47 428.87 75896.39 38322.20 128227.84 00:17:25.830 ======================================================== 00:17:25.830 Total : 2940.45 735.11 88752.33 38322.20 163396.00 00:17:25.830 00:17:25.830 15:40:20 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:25.830 15:40:20 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.088 15:40:21 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:26.088 15:40:21 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:26.088 15:40:21 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:26.088 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.088 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:17:26.088 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.088 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:17:26.088 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.088 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.347 rmmod nvme_tcp 00:17:26.347 rmmod nvme_fabrics 00:17:26.347 rmmod nvme_keyring 00:17:26.347 15:40:21 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86589 ']' 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86589 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 86589 ']' 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 86589 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86589 00:17:26.347 killing process with pid 86589 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86589' 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 86589 00:17:26.347 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 86589 00:17:26.916 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:26.916 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:26.916 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:26.916 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.916 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.916 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.916 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.916 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.916 15:40:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:26.916 ************************************ 00:17:26.916 END TEST nvmf_perf 00:17:26.916 ************************************ 00:17:26.916 00:17:26.916 real 0m13.495s 00:17:26.916 user 0m49.660s 00:17:26.916 sys 0m3.416s 00:17:26.916 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:26.916 15:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:26.916 15:40:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:26.916 15:40:21 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:26.916 15:40:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:26.916 15:40:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:26.916 15:40:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:26.916 ************************************ 00:17:26.916 START TEST nvmf_fio_host 00:17:26.916 ************************************ 00:17:26.916 15:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:26.916 * Looking for test storage... 
00:17:26.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.916 15:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:26.917 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:27.175 Cannot find device "nvmf_tgt_br" 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.175 Cannot find device "nvmf_tgt_br2" 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:27.175 Cannot find device "nvmf_tgt_br" 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:27.175 Cannot find device "nvmf_tgt_br2" 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.175 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:27.176 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:27.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:27.435 00:17:27.435 --- 10.0.0.2 ping statistics --- 00:17:27.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.435 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:27.435 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:27.435 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:17:27.435 00:17:27.435 --- 10.0.0.3 ping statistics --- 00:17:27.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.435 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:27.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:17:27.435 00:17:27.435 --- 10.0.0.1 ping statistics --- 00:17:27.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.435 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87056 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87056 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87056 ']' 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.435 15:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.435 [2024-07-15 15:40:22.448382] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:27.435 [2024-07-15 15:40:22.448473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.694 [2024-07-15 15:40:22.587681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.694 [2024-07-15 15:40:22.636743] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
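Similarly, the target start-up logged here and the JSON-RPC provisioning calls traced just below reduce to the short sequence sketched next. Paths and arguments are copied from the trace; the polling loop is only a stand-in for the harness's waitforlisten helper, not what the script literally runs.

  # Sketch: launch nvmf_tgt inside the namespace, then provision it over /var/tmp/spdk.sock
  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

  # Wait until the RPC server answers (stand-in for waitforlisten)
  until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc1
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The fio_nvme invocation traced further down then points the SPDK fio plugin at that listener via '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'.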
00:17:27.694 [2024-07-15 15:40:22.636797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.694 [2024-07-15 15:40:22.636807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.694 [2024-07-15 15:40:22.636813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.694 [2024-07-15 15:40:22.636818] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.694 [2024-07-15 15:40:22.636957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.694 [2024-07-15 15:40:22.637274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.694 [2024-07-15 15:40:22.637668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.694 [2024-07-15 15:40:22.637785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:28.260 15:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.260 15:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:17:28.260 15:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:28.517 [2024-07-15 15:40:23.599793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.517 15:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:28.517 15:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:28.517 15:40:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.775 15:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:29.033 Malloc1 00:17:29.033 15:40:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:29.291 15:40:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:29.291 15:40:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:29.548 [2024-07-15 15:40:24.593443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.548 15:40:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:29.806 15:40:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:30.063 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:30.063 fio-3.35 00:17:30.063 Starting 1 thread 00:17:32.587 00:17:32.587 test: (groupid=0, jobs=1): err= 0: pid=87182: Mon Jul 15 15:40:27 2024 00:17:32.587 read: IOPS=9618, BW=37.6MiB/s (39.4MB/s)(75.4MiB/2006msec) 00:17:32.587 slat (nsec): min=1796, max=326697, avg=2320.17, stdev=3408.25 00:17:32.587 clat (usec): min=3201, max=12296, avg=6940.79, stdev=544.72 00:17:32.587 lat (usec): min=3243, max=12298, avg=6943.11, stdev=544.61 00:17:32.587 clat percentiles (usec): 00:17:32.587 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6521], 00:17:32.587 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:17:32.587 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7832], 00:17:32.587 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[10290], 99.95th=[11469], 00:17:32.587 | 99.99th=[12256] 00:17:32.587 bw ( KiB/s): min=36760, max=39808, per=99.92%, avg=38440.00, stdev=1383.84, samples=4 00:17:32.587 iops : min= 9190, max= 9952, avg=9610.00, stdev=345.96, samples=4 00:17:32.587 write: IOPS=9615, BW=37.6MiB/s (39.4MB/s)(75.3MiB/2006msec); 0 zone resets 00:17:32.587 slat (nsec): min=1860, max=269441, avg=2428.15, stdev=2439.54 00:17:32.587 clat (usec): min=2461, max=12161, avg=6317.33, stdev=494.81 
00:17:32.587 lat (usec): min=2476, max=12163, avg=6319.76, stdev=494.77 00:17:32.587 clat percentiles (usec): 00:17:32.587 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 5932], 00:17:32.587 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6259], 60.00th=[ 6390], 00:17:32.587 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 7111], 00:17:32.587 | 99.00th=[ 7701], 99.50th=[ 8029], 99.90th=[ 9372], 99.95th=[11207], 00:17:32.587 | 99.99th=[12125] 00:17:32.587 bw ( KiB/s): min=37272, max=39552, per=100.00%, avg=38472.00, stdev=934.67, samples=4 00:17:32.587 iops : min= 9318, max= 9888, avg=9618.00, stdev=233.67, samples=4 00:17:32.587 lat (msec) : 4=0.08%, 10=99.81%, 20=0.11% 00:17:32.587 cpu : usr=69.48%, sys=21.50%, ctx=5, majf=0, minf=7 00:17:32.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:32.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:32.587 issued rwts: total=19294,19289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.587 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:32.587 00:17:32.587 Run status group 0 (all jobs): 00:17:32.587 READ: bw=37.6MiB/s (39.4MB/s), 37.6MiB/s-37.6MiB/s (39.4MB/s-39.4MB/s), io=75.4MiB (79.0MB), run=2006-2006msec 00:17:32.587 WRITE: bw=37.6MiB/s (39.4MB/s), 37.6MiB/s-37.6MiB/s (39.4MB/s-39.4MB/s), io=75.3MiB (79.0MB), run=2006-2006msec 00:17:32.587 15:40:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:32.587 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:32.588 15:40:27 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:32.588 15:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:32.588 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:32.588 fio-3.35 00:17:32.588 Starting 1 thread 00:17:35.117 00:17:35.117 test: (groupid=0, jobs=1): err= 0: pid=87225: Mon Jul 15 15:40:29 2024 00:17:35.117 read: IOPS=8568, BW=134MiB/s (140MB/s)(269MiB/2006msec) 00:17:35.117 slat (usec): min=2, max=111, avg= 3.73, stdev= 2.34 00:17:35.117 clat (usec): min=2161, max=16613, avg=8846.04, stdev=2085.69 00:17:35.117 lat (usec): min=2164, max=16616, avg=8849.78, stdev=2085.70 00:17:35.117 clat percentiles (usec): 00:17:35.117 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 6980], 00:17:35.117 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9503], 00:17:35.117 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11338], 95.00th=[12125], 00:17:35.117 | 99.00th=[14222], 99.50th=[14877], 99.90th=[16188], 99.95th=[16319], 00:17:35.117 | 99.99th=[16581] 00:17:35.117 bw ( KiB/s): min=60320, max=81312, per=50.95%, avg=69848.00, stdev=10825.41, samples=4 00:17:35.117 iops : min= 3770, max= 5082, avg=4365.50, stdev=676.59, samples=4 00:17:35.117 write: IOPS=5133, BW=80.2MiB/s (84.1MB/s)(143MiB/1784msec); 0 zone resets 00:17:35.117 slat (usec): min=32, max=330, avg=37.07, stdev= 8.95 00:17:35.117 clat (usec): min=4661, max=16962, avg=10721.61, stdev=1758.70 00:17:35.117 lat (usec): min=4694, max=16996, avg=10758.68, stdev=1758.56 00:17:35.117 clat percentiles (usec): 00:17:35.117 | 1.00th=[ 7242], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9241], 00:17:35.117 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10552], 60.00th=[10945], 00:17:35.117 | 70.00th=[11469], 80.00th=[12125], 90.00th=[13042], 95.00th=[13829], 00:17:35.117 | 99.00th=[15795], 99.50th=[16057], 99.90th=[16581], 99.95th=[16712], 00:17:35.117 | 99.99th=[16909] 00:17:35.117 bw ( KiB/s): min=62880, max=84992, per=88.63%, avg=72792.00, stdev=11328.04, samples=4 00:17:35.117 iops : min= 3930, max= 5312, avg=4549.50, stdev=708.00, samples=4 00:17:35.117 lat (msec) : 4=0.16%, 10=57.08%, 20=42.76% 00:17:35.118 cpu : usr=73.08%, sys=18.05%, ctx=4, majf=0, minf=24 00:17:35.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:35.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:35.118 issued rwts: total=17189,9158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:35.118 00:17:35.118 Run status group 0 (all jobs): 00:17:35.118 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=269MiB (282MB), run=2006-2006msec 00:17:35.118 WRITE: bw=80.2MiB/s (84.1MB/s), 
80.2MiB/s-80.2MiB/s (84.1MB/s-84.1MB/s), io=143MiB (150MB), run=1784-1784msec 00:17:35.118 15:40:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.118 rmmod nvme_tcp 00:17:35.118 rmmod nvme_fabrics 00:17:35.118 rmmod nvme_keyring 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87056 ']' 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87056 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87056 ']' 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87056 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87056 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:35.118 killing process with pid 87056 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87056' 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87056 00:17:35.118 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87056 00:17:35.377 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.377 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.377 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.377 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.377 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.377 15:40:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.377 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.377 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.377 15:40:30 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:35.377 00:17:35.377 real 0m8.433s 00:17:35.377 user 0m34.865s 00:17:35.377 sys 0m2.136s 00:17:35.377 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:35.377 ************************************ 00:17:35.377 END TEST nvmf_fio_host 00:17:35.377 ************************************ 00:17:35.377 15:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.377 15:40:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:35.377 15:40:30 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:35.377 15:40:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:35.377 15:40:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.377 15:40:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.377 ************************************ 00:17:35.377 START TEST nvmf_failover 00:17:35.377 ************************************ 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:35.377 * Looking for test storage... 00:17:35.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.377 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:35.637 15:40:30 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.637 15:40:30 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.637 15:40:30 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.637 15:40:30 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
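The variables initialized above (NVMF_PORT=4420, the 4421/4422 secondary ports, the 10.0.0.x addresses, NVME_HOSTNQN, NVME_CONNECT, NVME_SUBNQN) are the knobs the host-side tests consume. This particular run drives I/O through the SPDK fio plugin and bdevperf rather than the kernel initiator, so the following is only a hypothetical illustration of how those same variables map onto standard nvme-cli calls, not a command sequence taken from this log.

# Hedged sketch: kernel-initiator use of the environment set up by nvmf/common.sh.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn          # default subsystem NQN from nvmf/common.sh
NVME_HOSTNQN=$(nvme gen-hostnqn)                 # the log generates its own value the same way
# Discover what the target exposes, connect to the subsystem, then tear the session down.
nvme discover   -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" --hostnqn="$NVME_HOSTNQN"
nvme connect    -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" -n "$NVME_SUBNQN" --hostnqn="$NVME_HOSTNQN"
nvme disconnect -n "$NVME_SUBNQN"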
00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:35.638 Cannot find device "nvmf_tgt_br" 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.638 Cannot find device "nvmf_tgt_br2" 00:17:35.638 15:40:30 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:35.638 Cannot find device "nvmf_tgt_br" 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:35.638 Cannot find device "nvmf_tgt_br2" 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:35.638 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:35.638 15:40:30 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:35.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:17:35.902 00:17:35.902 --- 10.0.0.2 ping statistics --- 00:17:35.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.902 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:35.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:35.902 00:17:35.902 --- 10.0.0.3 ping statistics --- 00:17:35.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.902 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:35.902 00:17:35.902 --- 10.0.0.1 ping statistics --- 00:17:35.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.902 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87442 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87442 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87442 ']' 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.902 15:40:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:35.902 [2024-07-15 15:40:30.909333] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:35.902 [2024-07-15 15:40:30.909434] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.161 [2024-07-15 15:40:31.049067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:36.161 [2024-07-15 15:40:31.100117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.161 [2024-07-15 15:40:31.100181] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.161 [2024-07-15 15:40:31.100191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.161 [2024-07-15 15:40:31.100198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.161 [2024-07-15 15:40:31.100204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
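As logged above, the target is launched inside the namespace with a shared-memory id (-i 0), a tracepoint group mask (-e 0xFFFF) and a three-core reactor mask (-m 0xE), and the harness's waitforlisten helper blocks until the RPC socket answers. A rough, simplified stand-in for that launch-and-wait step, polling the standard rpc_get_methods RPC instead of using the harness helper, could look like this:

# Approximate stand-in for nvmfappstart/waitforlisten as seen in the log above.
# -i: shared-memory id, -e: tracepoint group mask, -m: reactor core mask.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Poll the default RPC UNIX socket until the application is ready to serve requests.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target process died
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"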
00:17:36.161 [2024-07-15 15:40:31.100348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.161 [2024-07-15 15:40:31.101085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.161 [2024-07-15 15:40:31.101144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.729 15:40:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.729 15:40:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:36.729 15:40:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.729 15:40:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:36.729 15:40:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:36.988 15:40:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.988 15:40:31 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:37.329 [2024-07-15 15:40:32.142014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.329 15:40:32 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:37.329 Malloc0 00:17:37.587 15:40:32 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:37.587 15:40:32 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.846 15:40:32 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.104 [2024-07-15 15:40:33.110024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.104 15:40:33 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:38.363 [2024-07-15 15:40:33.326173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:38.363 15:40:33 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:38.622 [2024-07-15 15:40:33.542344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:38.622 15:40:33 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:38.622 15:40:33 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87554 00:17:38.622 15:40:33 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.622 15:40:33 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87554 /var/tmp/bdevperf.sock 00:17:38.622 15:40:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87554 ']' 00:17:38.622 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock... 00:17:38.622 15:40:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.622 15:40:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.622 15:40:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.622 15:40:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.622 15:40:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:38.880 15:40:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.880 15:40:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:38.880 15:40:33 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:39.139 NVMe0n1 00:17:39.139 15:40:34 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:39.398 00:17:39.398 15:40:34 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:39.398 15:40:34 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87588 00:17:39.398 15:40:34 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:40.777 15:40:35 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.777 [2024-07-15 15:40:35.775790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.777 [2024-07-15 15:40:35.775846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.777 [2024-07-15 15:40:35.775859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.777 [2024-07-15 15:40:35.775867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.777 [2024-07-15 15:40:35.775875] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.777 [2024-07-15 15:40:35.775883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.777 [2024-07-15 15:40:35.775892] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.777 [2024-07-15 15:40:35.775900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.777 [2024-07-15 15:40:35.775923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.777 [2024-07-15 15:40:35.775946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.777 [2024-07-15 
15:40:35.775969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.777 (same tcp.c:1607 message repeated once per timestamp from 15:40:35.775976 through 15:40:35.776328) 00:17:40.778 [2024-07-15 15:40:35.776336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.778 [2024-07-15 15:40:35.776343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.778 [2024-07-15 15:40:35.776351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.778 [2024-07-15 15:40:35.776359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.778 [2024-07-15 15:40:35.776367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.778 [2024-07-15 15:40:35.776374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.778 [2024-07-15 15:40:35.776382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.778 [2024-07-15 15:40:35.776390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.778 [2024-07-15 15:40:35.776397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.778 [2024-07-15 15:40:35.776405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86f80 is same with the state(5) to be set 00:17:40.778 15:40:35 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:44.060 15:40:38 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:44.060 00:17:44.060 15:40:39 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:44.320 [2024-07-15 15:40:39.345029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.320 [2024-07-15 15:40:39.345092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.320 [2024-07-15 15:40:39.345103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.320 [2024-07-15 15:40:39.345127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.320 [2024-07-15 15:40:39.345135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.320 [2024-07-15 15:40:39.345142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.320 [2024-07-15 15:40:39.345150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.320 [2024-07-15 15:40:39.345157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.320 [2024-07-15 15:40:39.345165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.320 [2024-07-15 
15:40:39.345172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.320 (same tcp.c:1607 message repeated once per timestamp from 15:40:39.345180 through 15:40:39.345743) 00:17:44.321 [2024-07-15 15:40:39.345751]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345775] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 
00:17:44.321 [2024-07-15 15:40:39.345945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.345991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.346000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.346008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.346015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.346023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.346031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.346038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.346046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.346054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.346062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.321 [2024-07-15 15:40:39.346070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.322 [2024-07-15 15:40:39.346078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.322 [2024-07-15 15:40:39.346085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.322 [2024-07-15 15:40:39.346093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.322 [2024-07-15 15:40:39.346101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is same with the state(5) to be set 00:17:44.322 [2024-07-15 15:40:39.346109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88340 is 
00:17:44.322 15:40:39 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:17:47.606 15:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:47.607 [2024-07-15 15:40:42.613365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:47.607 15:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:17:48.541 15:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:17:48.798 [2024-07-15 15:40:43.897712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88a20 is same with the state(5) to be set
[... the same tcp.c:1607:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0xd88a20 repeats, with only the microsecond timestamp advancing, from 15:40:43.897763 through 15:40:43.898078 ...]
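The failover step above reduces to two JSON-RPC calls against the running target: re-add the TCP listener on the primary portal (port 4420), then remove the secondary one (port 4422), whose teardown coincides with the burst of nvmf_tcp_qpair_set_recv_state errors just logged. A minimal sketch of that sequence, using only the rpc.py invocations, NQN, address and ports recorded in this log (the target and subsystem setup is assumed to have happened earlier in the run; this is not the actual host/failover.sh source):

    #!/usr/bin/env bash
    # Sketch of the listener add/remove sequence exercised by host/failover.sh above.
    # Assumes an SPDK NVMe-oF target is already serving nqn.2016-06.io.spdk:cnode1 over TCP.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    sleep 3
    # Bring the primary portal back; the target logs "NVMe/TCP Target Listening on 10.0.0.2 port 4420".
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    # Drop the secondary portal; the target tears down any connections still attached to it.
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422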
00:17:48.799 15:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 87588
00:17:55.366 0
00:17:55.366 15:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 87554
00:17:55.366 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87554 ']'
00:17:55.366 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87554
00:17:55.366 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:17:55.366 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:55.366 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87554
00:17:55.366 killing process with pid 87554
00:17:55.366 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:17:55.366 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:17:55.366 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87554'
00:17:55.366 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87554
00:17:55.366 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87554
00:17:55.366 15:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:17:55.366 [2024-07-15 15:40:33.604761] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization...
00:17:55.366 [2024-07-15 15:40:33.604867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87554 ]
00:17:55.366 [2024-07-15 15:40:33.741987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:55.366 [2024-07-15 15:40:33.812085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:55.366 Running I/O for 15 seconds...
00:17:55.366 [2024-07-15 15:40:35.778286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:55.366 [2024-07-15 15:40:35.778330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats through 15:40:35.781331 for every outstanding I/O on qid:1 (WRITEs lba:93240-93616 and READs lba:92856-93224, len:8 each), each command completing with ABORTED - SQ DELETION (00/08); the still-queued WRITEs from lba:93624 onward are then failed one by one via nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs (*ERROR*: aborting queued i/o) and nvme_qpair.c: 558:nvme_qpair_manual_complete_request, again reported as ABORTED - SQ DELETION (00/08), continuing below ...]
00:17:55.369 [2024-07-15 15:40:35.781852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*:
Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.781862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93696 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.781876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.781900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.781912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.781937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93704 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.781965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.781978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.781987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.781997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93712 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93720 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93728 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93736 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 
15:40:35.782175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93744 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93752 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93760 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93768 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93776 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93784 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93792 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93800 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93808 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93816 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.369 [2024-07-15 15:40:35.782685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.369 [2024-07-15 15:40:35.782697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93824 len:8 PRP1 0x0 PRP2 0x0 00:17:55.369 [2024-07-15 15:40:35.782721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.369 [2024-07-15 15:40:35.782736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.370 [2024-07-15 15:40:35.782746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.370 [2024-07-15 15:40:35.782756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93832 len:8 PRP1 0x0 PRP2 0x0 00:17:55.370 [2024-07-15 15:40:35.782770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.370 [2024-07-15 15:40:35.782784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.370 [2024-07-15 15:40:35.782793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.370 [2024-07-15 15:40:35.782804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:93840 len:8 PRP1 0x0 PRP2 0x0 00:17:55.370 [2024-07-15 15:40:35.782817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.370 [2024-07-15 15:40:35.782831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.370 [2024-07-15 15:40:35.782841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.370 [2024-07-15 15:40:35.782851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93848 len:8 PRP1 0x0 PRP2 0x0 00:17:55.370 [2024-07-15 15:40:35.782864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.370 [2024-07-15 15:40:35.782878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.370 [2024-07-15 15:40:35.782888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.370 [2024-07-15 15:40:35.782898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93856 len:8 PRP1 0x0 PRP2 0x0 00:17:55.370 [2024-07-15 15:40:35.782911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.370 [2024-07-15 15:40:35.782925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.370 [2024-07-15 15:40:35.782935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.370 [2024-07-15 15:40:35.782947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93864 len:8 PRP1 0x0 PRP2 0x0 00:17:55.370 [2024-07-15 15:40:35.782961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.370 [2024-07-15 15:40:35.782974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.370 [2024-07-15 15:40:35.782999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.370 [2024-07-15 15:40:35.783025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93872 len:8 PRP1 0x0 PRP2 0x0 00:17:55.370 [2024-07-15 15:40:35.783037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.370 [2024-07-15 15:40:35.783096] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12e5c90 was disconnected and freed. reset controller. 
00:17:55.370 [2024-07-15 15:40:35.783113] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:17:55.370 [2024-07-15 15:40:35.783168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:55.370 [2024-07-15 15:40:35.783188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:55.370 [2024-07-15 15:40:35.783212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:55.370 [2024-07-15 15:40:35.783225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:55.370 [2024-07-15 15:40:35.783238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:55.370 [2024-07-15 15:40:35.783250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:55.370 [2024-07-15 15:40:35.783264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:55.370 [2024-07-15 15:40:35.783276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:55.370 [2024-07-15 15:40:35.783288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:55.370 [2024-07-15 15:40:35.783333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1269e30 (9): Bad file descriptor
00:17:55.370 [2024-07-15 15:40:35.787050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:55.370 [2024-07-15 15:40:35.818517] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[repeated qpair trace, 2024-07-15 15:40:39.346458 - 15:40:39.350602: nvme_qpair.c 243:nvme_io_qpair_print_command *NOTICE* entries for READ sqid:1 nsid:1 lba:122000-122504 len:8 SGL TRANSPORT DATA BLOCK, WRITE sqid:1 nsid:1 lba:122712-123016 len:8 SGL DATA BLOCK, and READ sqid:1 nsid:1 lba:122512-122696 len:8 SGL TRANSPORT DATA BLOCK, each followed by 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08)]
00:17:55.373 [2024-07-15 15:40:39.350617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e7d90 is same with the state(5) to be set
00:17:55.373 [2024-07-15 15:40:39.350634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:55.373 [2024-07-15 15:40:39.350645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:55.373 [2024-07-15 15:40:39.350656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122704 len:8 PRP1 0x0 PRP2 0x0
00:17:55.373 [2024-07-15 15:40:39.350669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:55.373 [2024-07-15 15:40:39.350726] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12e7d90 was disconnected and freed. reset controller.
00:17:55.373 [2024-07-15 15:40:39.350747] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:17:55.373 [2024-07-15 15:40:39.350804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.373 [2024-07-15 15:40:39.350826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:39.350844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.373 [2024-07-15 15:40:39.350859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:39.350873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.373 [2024-07-15 15:40:39.350887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:39.350901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.373 [2024-07-15 15:40:39.350925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:39.350940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:55.373 [2024-07-15 15:40:39.354863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:55.373 [2024-07-15 15:40:39.354904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1269e30 (9): Bad file descriptor 00:17:55.373 [2024-07-15 15:40:39.386917] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
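The block above is one complete failover cycle as bdev_nvme logs it: a failover from 10.0.0.2:4421 to 10.0.0.2:4422 is started, every command still queued on the old qpair is completed with ABORTED - SQ DELETION, the controller is disconnected and reset, and the cycle ends with "Resetting controller successful." The same pattern repeats below for the next failover target. When reading a capture like this, it can help to filter the output down to just those transition events; the grep sketch below assumes the bdevperf output was saved to the try.txt file this job cats further down (adjust the path for other runs).

    # Reduce a saved bdevperf log to the failover transitions and reset results,
    # dropping the per-command ABORTED - SQ DELETION noise.
    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    grep -E 'bdev_nvme_failover_trid|_bdev_nvme_reset_ctrlr_complete' "$log"
    # Count completed resets (the same signal the test script checks later).
    grep -c 'Resetting controller successful' "$log"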
00:17:55.373 [2024-07-15 15:40:43.899306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.373 [2024-07-15 15:40:43.899393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:43.899419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.373 [2024-07-15 15:40:43.899433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:43.899449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.373 [2024-07-15 15:40:43.899462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:43.899476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.373 [2024-07-15 15:40:43.899488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:43.899501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.373 [2024-07-15 15:40:43.899514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:43.899528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.373 [2024-07-15 15:40:43.899557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:43.899570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.373 [2024-07-15 15:40:43.899595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:43.899613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.373 [2024-07-15 15:40:43.899626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:43.899641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.373 [2024-07-15 15:40:43.899653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:43.899667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.373 [2024-07-15 15:40:43.899680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:43.899694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.373 [2024-07-15 15:40:43.899733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.373 [2024-07-15 15:40:43.899749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.899762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.899810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.899823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.899838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.899852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.899867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.899880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.899895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.899909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.899924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.899953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.899968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.899981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.899995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:93 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99864 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 
15:40:43.900662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.374 [2024-07-15 15:40:43.900864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.374 [2024-07-15 15:40:43.900885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.900901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.900914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.900928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.900941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.900955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.900967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.900982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.900995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:55.375 [2024-07-15 15:40:43.901825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.901978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.901997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.902012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.902026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.902040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.375 [2024-07-15 15:40:43.902052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.902067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.375 [2024-07-15 15:40:43.902079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.902093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.375 [2024-07-15 15:40:43.902106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.375 [2024-07-15 15:40:43.902120] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.375 [2024-07-15 15:40:43.902132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:25 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.902983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.902998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100272 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.903011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.903041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.903054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.903084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.903102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.903122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.903135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.903151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.903164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.903178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.376 [2024-07-15 15:40:43.903191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.903219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.376 [2024-07-15 15:40:43.903232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.376 [2024-07-15 15:40:43.903243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100320 len:8 PRP1 0x0 PRP2 0x0 00:17:55.376 [2024-07-15 15:40:43.903256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.376 [2024-07-15 15:40:43.903300] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12e7b80 was disconnected and freed. reset controller. 
00:17:55.376 [2024-07-15 15:40:43.903316] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 
00:17:55.376 [2024-07-15 15:40:43.903367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:55.376 [2024-07-15 15:40:43.903388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:55.376 [2024-07-15 15:40:43.903401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:55.376 [2024-07-15 15:40:43.903414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:55.376 [2024-07-15 15:40:43.903427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:55.376 [2024-07-15 15:40:43.903442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:55.376 [2024-07-15 15:40:43.903455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:55.376 [2024-07-15 15:40:43.903467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:55.376 [2024-07-15 15:40:43.903479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:55.376 [2024-07-15 15:40:43.903524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1269e30 (9): Bad file descriptor 
00:17:55.376 [2024-07-15 15:40:43.907335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:17:55.377 [2024-07-15 15:40:43.940629] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:55.377 
00:17:55.377 Latency(us) 
00:17:55.377 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max 
00:17:55.377 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:17:55.377     Verification LBA range: start 0x0 length 0x4000 
00:17:55.377     NVMe0n1             :      15.00    9948.52      38.86     196.03       0.00   12589.43     815.48   16324.42 
00:17:55.377 =================================================================================================================== 
00:17:55.377 Total                       :            9948.52      38.86     196.03       0.00   12589.43     815.48   16324.42 
00:17:55.377 Received shutdown signal, test time was about 15.000000 seconds 
00:17:55.377 
00:17:55.377 Latency(us) 
00:17:55.377 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max 
00:17:55.377 =================================================================================================================== 
00:17:55.377 Total                       :               0.00       0.00       0.00       0.00       0.00       0.00       0.00 
00:17:55.377 15:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:17:55.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
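The failover.sh@65 trace just above, together with the @67 check that follows, is the pass criterion for this part of the run: the script counts the 'Resetting controller successful' messages in the bdevperf output and requires exactly three, one for each failover exercised in the run above. A minimal sketch of that assertion, assuming the output sits in the try.txt file shown later in this log:

    # Hypothetical standalone version of the check traced at failover.sh@65-@67.
    out=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$out")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi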
00:17:55.377 15:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:17:55.377 15:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:17:55.377 15:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=87792 00:17:55.377 15:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 87792 /var/tmp/bdevperf.sock 00:17:55.377 15:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:55.377 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87792 ']' 00:17:55.377 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.377 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.377 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.377 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.377 15:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:55.942 15:40:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.942 15:40:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:55.942 15:40:50 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:55.942 [2024-07-15 15:40:51.046158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:55.942 15:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:56.199 [2024-07-15 15:40:51.254263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:56.199 15:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:56.456 NVMe0n1 00:17:56.456 15:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:56.712 00:17:56.712 15:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:56.969 00:17:57.226 15:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:57.226 15:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:57.483 15:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:57.739 15:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:01.017 15:40:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:01.017 15:40:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:01.017 15:40:55 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=87929 00:18:01.017 15:40:55 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:01.017 15:40:55 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 87929 00:18:01.952 0 00:18:01.952 15:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:01.952 [2024-07-15 15:40:49.877356] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:18:01.952 [2024-07-15 15:40:49.877471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87792 ] 00:18:01.952 [2024-07-15 15:40:50.014042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.952 [2024-07-15 15:40:50.070441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.952 [2024-07-15 15:40:52.609487] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:01.952 [2024-07-15 15:40:52.609643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.952 [2024-07-15 15:40:52.609670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.952 [2024-07-15 15:40:52.609689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.952 [2024-07-15 15:40:52.609703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.952 [2024-07-15 15:40:52.609717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.952 [2024-07-15 15:40:52.609730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.952 [2024-07-15 15:40:52.609744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.952 [2024-07-15 15:40:52.609758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.952 [2024-07-15 15:40:52.609771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:01.952 [2024-07-15 15:40:52.609810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:01.952 [2024-07-15 15:40:52.609840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd4e30 (9): Bad file descriptor 00:18:01.952 [2024-07-15 15:40:52.613841] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:01.952 Running I/O for 1 seconds... 
00:18:01.952 00:18:01.952 Latency(us) 00:18:01.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.952 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:01.952 Verification LBA range: start 0x0 length 0x4000 00:18:01.952 NVMe0n1 : 1.01 9105.30 35.57 0.00 0.00 13970.51 1578.82 15252.01 00:18:01.952 =================================================================================================================== 00:18:01.952 Total : 9105.30 35.57 0.00 0.00 13970.51 1578.82 15252.01 00:18:01.952 15:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:01.952 15:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:02.220 15:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:02.492 15:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:02.492 15:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:02.751 15:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:03.010 15:40:58 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:06.294 15:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:06.294 15:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:06.294 15:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 87792 00:18:06.294 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87792 ']' 00:18:06.295 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87792 00:18:06.295 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:06.295 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.295 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87792 00:18:06.295 killing process with pid 87792 00:18:06.295 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:06.295 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:06.295 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87792' 00:18:06.295 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87792 00:18:06.295 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87792 00:18:06.553 15:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:06.553 15:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.812 15:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:06.812 15:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:06.812 15:41:01 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:06.812 15:41:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.812 15:41:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:18:06.812 15:41:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.812 15:41:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:18:06.812 15:41:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.812 15:41:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.812 rmmod nvme_tcp 00:18:06.812 rmmod nvme_fabrics 00:18:06.812 rmmod nvme_keyring 00:18:07.070 15:41:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:07.070 15:41:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:18:07.070 15:41:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:18:07.070 15:41:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87442 ']' 00:18:07.070 15:41:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87442 00:18:07.070 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87442 ']' 00:18:07.070 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87442 00:18:07.070 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:07.071 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.071 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87442 00:18:07.071 killing process with pid 87442 00:18:07.071 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:07.071 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:07.071 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87442' 00:18:07.071 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87442 00:18:07.071 15:41:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87442 00:18:07.071 15:41:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:07.071 15:41:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:07.071 15:41:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:07.071 15:41:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.071 15:41:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:07.071 15:41:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.071 15:41:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.071 15:41:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.071 15:41:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:07.071 00:18:07.071 real 0m31.771s 00:18:07.071 user 2m3.795s 00:18:07.071 sys 0m4.436s 00:18:07.071 15:41:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:07.071 ************************************ 00:18:07.071 15:41:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:07.071 END TEST nvmf_failover 00:18:07.071 ************************************ 00:18:07.330 15:41:02 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:18:07.330 15:41:02 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:07.330 15:41:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:07.330 15:41:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.330 15:41:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:07.330 ************************************ 00:18:07.330 START TEST nvmf_host_discovery 00:18:07.330 ************************************ 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:07.330 * Looking for test storage... 00:18:07.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:07.330 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:07.331 Cannot find device "nvmf_tgt_br" 00:18:07.331 
15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.331 Cannot find device "nvmf_tgt_br2" 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:07.331 Cannot find device "nvmf_tgt_br" 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:07.331 Cannot find device "nvmf_tgt_br2" 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:07.331 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:07.589 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.589 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:07.589 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.589 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:07.589 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:07.589 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:07.589 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:07.589 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.589 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:07.589 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:07.589 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:07.589 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:07.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:18:07.590 00:18:07.590 --- 10.0.0.2 ping statistics --- 00:18:07.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.590 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:07.590 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:07.590 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:07.590 00:18:07.590 --- 10.0.0.3 ping statistics --- 00:18:07.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.590 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:07.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:07.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:18:07.590 00:18:07.590 --- 10.0.0.1 ping statistics --- 00:18:07.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.590 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88238 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88238 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88238 ']' 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.590 15:41:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.849 [2024-07-15 15:41:02.731733] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:18:07.849 [2024-07-15 15:41:02.731810] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.849 [2024-07-15 15:41:02.868539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.849 [2024-07-15 15:41:02.924003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
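The ip/iptables wall above is nvmf_veth_init building a small bridged veth topology so the host side (10.0.0.1) can reach the nvmf_tgt that is about to run inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3. A condensed sketch using only commands and addresses visible in this log; per-interface link-up steps, the failed teardown fallbacks and the cleanup path are omitted:
# Condensed from the nvmf_veth_init trace above (nvmf/common.sh@141-207).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # host/initiator side, 10.0.0.1
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target address, 10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_br up                                      # the veth ends are brought up as well
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # sanity check: host reaches the target namespace
# nvmf_tgt is then launched inside the namespace, as traced above:
# ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2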
00:18:07.849 [2024-07-15 15:41:02.924074] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.849 [2024-07-15 15:41:02.924101] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.849 [2024-07-15 15:41:02.924109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.849 [2024-07-15 15:41:02.924116] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.849 [2024-07-15 15:41:02.924160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:08.108 [2024-07-15 15:41:03.056727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:08.108 [2024-07-15 15:41:03.064801] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:08.108 null0 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:08.108 null1 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:08.108 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88269 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:08.108 15:41:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88269 /tmp/host.sock 00:18:08.109 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88269 ']' 00:18:08.109 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:08.109 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.109 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:08.109 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.109 15:41:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:08.109 [2024-07-15 15:41:03.156917] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:18:08.109 [2024-07-15 15:41:03.157067] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88269 ] 00:18:08.367 [2024-07-15 15:41:03.296765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.367 [2024-07-15 15:41:03.369241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.306 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.306 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:09.306 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:09.307 15:41:04 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:09.307 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.566 [2024-07-15 15:41:04.526243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- 
# sort 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:09.566 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:18:09.825 15:41:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:10.084 [2024-07-15 15:41:05.159977] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:10.084 [2024-07-15 15:41:05.160031] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:10.084 [2024-07-15 15:41:05.160065] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:10.343 [2024-07-15 15:41:05.246199] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:10.343 [2024-07-15 15:41:05.303087] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:10.343 [2024-07-15 15:41:05.303150] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:10.912 15:41:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.912 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:10.912 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:10.912 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:10.912 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:10.912 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:10.912 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.912 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:10.912 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.912 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:10.913 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:10.913 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:10.913 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:10.913 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:10.913 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:10.913 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:10.913 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:10.913 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.913 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:10.913 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:10.913 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:11.173 
15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.173 [2024-07-15 15:41:06.142965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:11.173 [2024-07-15 15:41:06.143340] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:11.173 [2024-07-15 15:41:06.143377] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:11.173 [2024-07-15 15:41:06.229415] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:11.173 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.173 [2024-07-15 15:41:06.292804] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:11.173 [2024-07-15 15:41:06.292831] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:11.173 [2024-07-15 15:41:06.292839] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:11.432 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:11.432 15:41:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:12.369 15:41:07 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:12.369 [2024-07-15 15:41:07.445481] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:12.369 [2024-07-15 15:41:07.445536] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:12.369 [2024-07-15 15:41:07.450890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.369 [2024-07-15 15:41:07.450930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.369 [2024-07-15 15:41:07.450944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.369 [2024-07-15 15:41:07.450954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.369 [2024-07-15 15:41:07.450965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.369 [2024-07-15 15:41:07.450974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.369 [2024-07-15 15:41:07.450984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.369 [2024-07-15 15:41:07.450994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:12.369 [2024-07-15 15:41:07.451003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebc50 is same with the state(5) to be set 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:12.369 [2024-07-15 15:41:07.460846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcebc50 (9): Bad file descriptor 00:18:12.369 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.369 [2024-07-15 15:41:07.470864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.370 [2024-07-15 15:41:07.470984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.370 [2024-07-15 15:41:07.471009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebc50 with addr=10.0.0.2, port=4420 00:18:12.370 [2024-07-15 15:41:07.471020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebc50 is same with the state(5) to be set 00:18:12.370 [2024-07-15 15:41:07.471038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcebc50 (9): Bad file descriptor 00:18:12.370 [2024-07-15 15:41:07.471054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.370 [2024-07-15 15:41:07.471064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.370 [2024-07-15 15:41:07.471075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.370 [2024-07-15 15:41:07.471091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:12.370 [2024-07-15 15:41:07.480922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.370 [2024-07-15 15:41:07.481038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.370 [2024-07-15 15:41:07.481060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebc50 with addr=10.0.0.2, port=4420 00:18:12.370 [2024-07-15 15:41:07.481071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebc50 is same with the state(5) to be set 00:18:12.370 [2024-07-15 15:41:07.481088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcebc50 (9): Bad file descriptor 00:18:12.370 [2024-07-15 15:41:07.481103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.370 [2024-07-15 15:41:07.481112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.370 [2024-07-15 15:41:07.481122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.370 [2024-07-15 15:41:07.481137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.370 [2024-07-15 15:41:07.490991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.370 [2024-07-15 15:41:07.491098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.370 [2024-07-15 15:41:07.491135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebc50 with addr=10.0.0.2, port=4420 00:18:12.370 [2024-07-15 15:41:07.491146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebc50 is same with the state(5) to be set 00:18:12.370 [2024-07-15 15:41:07.491163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcebc50 (9): Bad file descriptor 00:18:12.370 [2024-07-15 15:41:07.491177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.370 [2024-07-15 15:41:07.491186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.370 [2024-07-15 15:41:07.491195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.370 [2024-07-15 15:41:07.491210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:12.630 [2024-07-15 15:41:07.501060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.630 [2024-07-15 15:41:07.501189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.630 [2024-07-15 15:41:07.501210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebc50 with addr=10.0.0.2, port=4420 00:18:12.630 [2024-07-15 15:41:07.501228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebc50 is same with the state(5) to be set 00:18:12.630 [2024-07-15 15:41:07.501244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcebc50 (9): Bad file descriptor 00:18:12.630 [2024-07-15 15:41:07.501258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.630 [2024-07-15 15:41:07.501267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.630 [2024-07-15 15:41:07.501276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.630 [2024-07-15 15:41:07.501291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:12.630 [2024-07-15 15:41:07.511142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.630 [2024-07-15 15:41:07.511231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:12.630 [2024-07-15 15:41:07.511251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebc50 with addr=10.0.0.2, port=4420 00:18:12.630 [2024-07-15 15:41:07.511262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebc50 is same with the state(5) to be set 00:18:12.630 [2024-07-15 15:41:07.511278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcebc50 (9): Bad file descriptor 00:18:12.630 [2024-07-15 15:41:07.511292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.630 [2024-07-15 15:41:07.511300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.630 [2024-07-15 15:41:07.511309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.630 [2024-07-15 15:41:07.511323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:12.630 [2024-07-15 15:41:07.521202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.630 [2024-07-15 15:41:07.521295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.630 [2024-07-15 15:41:07.521317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebc50 with addr=10.0.0.2, port=4420 00:18:12.630 [2024-07-15 15:41:07.521329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebc50 is same with the state(5) to be set 00:18:12.630 [2024-07-15 15:41:07.521346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcebc50 (9): Bad file descriptor 00:18:12.630 [2024-07-15 15:41:07.521361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.630 [2024-07-15 15:41:07.521370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.630 [2024-07-15 15:41:07.521380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.630 [2024-07-15 15:41:07.521395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.630 [2024-07-15 15:41:07.531259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.630 [2024-07-15 15:41:07.531372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.630 [2024-07-15 15:41:07.531394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebc50 with addr=10.0.0.2, port=4420 00:18:12.630 [2024-07-15 15:41:07.531405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebc50 is same with the state(5) to be set 00:18:12.630 [2024-07-15 15:41:07.531422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcebc50 (9): Bad file descriptor 00:18:12.630 [2024-07-15 15:41:07.531437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.630 [2024-07-15 15:41:07.531447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.630 [2024-07-15 15:41:07.531456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:18:12.630 [2024-07-15 15:41:07.531470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.630 [2024-07-15 15:41:07.532064] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:12.630 [2024-07-15 15:41:07.532093] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_notification_count 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.630 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:12.631 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.891 15:41:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:13.828 [2024-07-15 15:41:08.890180] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:13.828 [2024-07-15 15:41:08.890205] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:13.828 [2024-07-15 15:41:08.890222] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:14.087 [2024-07-15 15:41:08.976281] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:14.087 [2024-07-15 15:41:09.036355] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:14.087 [2024-07-15 15:41:09.036395] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.087 2024/07/15 15:41:09 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 
trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:14.087 request: 00:18:14.087 { 00:18:14.087 "method": "bdev_nvme_start_discovery", 00:18:14.087 "params": { 00:18:14.087 "name": "nvme", 00:18:14.087 "trtype": "tcp", 00:18:14.087 "traddr": "10.0.0.2", 00:18:14.087 "adrfam": "ipv4", 00:18:14.087 "trsvcid": "8009", 00:18:14.087 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:14.087 "wait_for_attach": true 00:18:14.087 } 00:18:14.087 } 00:18:14.087 Got JSON-RPC error response 00:18:14.087 GoRPCClient: error on JSON-RPC call 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:14.087 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.088 15:41:09 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.088 2024/07/15 15:41:09 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:14.088 request: 00:18:14.088 { 00:18:14.088 "method": "bdev_nvme_start_discovery", 00:18:14.088 "params": { 00:18:14.088 "name": "nvme_second", 00:18:14.088 "trtype": "tcp", 00:18:14.088 "traddr": "10.0.0.2", 00:18:14.088 "adrfam": "ipv4", 00:18:14.088 "trsvcid": "8009", 00:18:14.088 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:14.088 "wait_for_attach": true 00:18:14.088 } 00:18:14.088 } 00:18:14.088 Got JSON-RPC error response 00:18:14.088 GoRPCClient: error on JSON-RPC call 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:14.088 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.347 15:41:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.285 [2024-07-15 15:41:10.305567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:15.285 [2024-07-15 15:41:10.305639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce8540 with addr=10.0.0.2, port=8010 00:18:15.285 [2024-07-15 15:41:10.305659] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:15.285 [2024-07-15 15:41:10.305668] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:15.285 [2024-07-15 15:41:10.305676] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:16.221 [2024-07-15 15:41:11.305528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.221 [2024-07-15 15:41:11.305629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce8540 with addr=10.0.0.2, port=8010 00:18:16.221 [2024-07-15 15:41:11.305649] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:16.221 [2024-07-15 15:41:11.305659] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:16.221 [2024-07-15 15:41:11.305667] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:17.595 [2024-07-15 15:41:12.305403] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:18:17.595 2024/07/15 15:41:12 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test 
name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:18:17.595 request: 00:18:17.595 { 00:18:17.595 "method": "bdev_nvme_start_discovery", 00:18:17.595 "params": { 00:18:17.595 "name": "nvme_second", 00:18:17.595 "trtype": "tcp", 00:18:17.595 "traddr": "10.0.0.2", 00:18:17.595 "adrfam": "ipv4", 00:18:17.595 "trsvcid": "8010", 00:18:17.595 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:17.595 "wait_for_attach": false, 00:18:17.595 "attach_timeout_ms": 3000 00:18:17.595 } 00:18:17.595 } 00:18:17.595 Got JSON-RPC error response 00:18:17.595 GoRPCClient: error on JSON-RPC call 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88269 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.595 rmmod nvme_tcp 00:18:17.595 rmmod nvme_fabrics 00:18:17.595 rmmod nvme_keyring 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88238 ']' 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88238 00:18:17.595 
15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88238 ']' 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88238 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88238 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88238' 00:18:17.595 killing process with pid 88238 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88238 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88238 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:17.595 00:18:17.595 real 0m10.436s 00:18:17.595 user 0m21.237s 00:18:17.595 sys 0m1.545s 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:17.595 ************************************ 00:18:17.595 END TEST nvmf_host_discovery 00:18:17.595 ************************************ 00:18:17.595 15:41:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:17.595 15:41:12 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:17.595 15:41:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:17.595 15:41:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.595 15:41:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:17.595 ************************************ 00:18:17.595 START TEST nvmf_host_multipath_status 00:18:17.595 ************************************ 00:18:17.595 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:17.855 * Looking for test storage... 
00:18:17.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:17.855 Cannot find device "nvmf_tgt_br" 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:18:17.855 Cannot find device "nvmf_tgt_br2" 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:17.855 Cannot find device "nvmf_tgt_br" 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:17.855 Cannot find device "nvmf_tgt_br2" 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:17.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:17.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:17.855 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:18.114 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:18.114 15:41:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:18.114 15:41:13 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:18.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:18.114 00:18:18.114 --- 10.0.0.2 ping statistics --- 00:18:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.114 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:18.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:18.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:18:18.114 00:18:18.114 --- 10.0.0.3 ping statistics --- 00:18:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.114 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:18.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:18.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:18.114 00:18:18.114 --- 10.0.0.1 ping statistics --- 00:18:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.114 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:18.114 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:18.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=88754 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 88754 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 88754 ']' 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.115 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:18.115 [2024-07-15 15:41:13.203634] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
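The nvmf_veth_init sequence traced above is what gives the test its NVMe/TCP fabric: the target runs inside the nvmf_tgt_ns_spdk namespace and is reached from the initiator through a veth pair on each side of a bridge. A condensed sketch of that topology, using only commands that appear in the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is created the same way and omitted here for brevity):

    # Target lives in its own network namespace; the initiator stays in the
    # default namespace. One veth pair per side, joined by a bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1 on the initiator side, 10.0.0.2 inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers together and open the NVMe/TCP port.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity check, then launch the target inside the namespace as logged above.
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

Once 10.0.0.2 answers the ping, the target is started with the same netns-exec prefix, which is why every listener added later in the log binds to 10.0.0.2.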
00:18:18.115 [2024-07-15 15:41:13.203728] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.373 [2024-07-15 15:41:13.341056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:18.373 [2024-07-15 15:41:13.397484] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.373 [2024-07-15 15:41:13.397545] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.373 [2024-07-15 15:41:13.397556] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.373 [2024-07-15 15:41:13.397564] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.373 [2024-07-15 15:41:13.397571] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.373 [2024-07-15 15:41:13.397780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.373 [2024-07-15 15:41:13.397805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.373 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.373 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:18.373 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.373 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:18.373 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:18.631 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.631 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=88754 00:18:18.631 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:18.889 [2024-07-15 15:41:13.794070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.889 15:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:19.148 Malloc0 00:18:19.148 15:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:19.426 15:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:19.723 15:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.029 [2024-07-15 15:41:14.849461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.029 15:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:18:20.029 [2024-07-15 15:41:15.073512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:20.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.029 15:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=88839 00:18:20.029 15:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:20.029 15:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:20.029 15:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 88839 /var/tmp/bdevperf.sock 00:18:20.029 15:41:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 88839 ']' 00:18:20.029 15:41:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.029 15:41:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.029 15:41:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.029 15:41:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.029 15:41:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:20.967 15:41:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.967 15:41:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:20.967 15:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:21.226 15:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:21.486 Nvme0n1 00:18:21.745 15:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:22.004 Nvme0n1 00:18:22.004 15:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:22.004 15:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:23.905 15:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:23.905 15:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:24.163 15:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n optimized 00:18:24.421 15:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:25.794 15:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:25.794 15:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:25.794 15:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.794 15:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:25.794 15:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:25.794 15:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:25.794 15:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.794 15:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:26.052 15:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:26.052 15:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:26.052 15:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.052 15:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:26.310 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.310 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:26.310 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.310 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:26.569 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.569 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:26.569 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:26.569 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.569 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.569 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:26.569 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.569 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:27.138 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:27.138 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:27.138 15:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:27.138 15:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:27.397 15:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:28.772 15:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:28.772 15:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:28.772 15:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.772 15:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:28.772 15:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:28.772 15:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:28.772 15:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.772 15:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:29.030 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.030 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:29.030 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:29.030 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.288 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.288 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:29.288 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.288 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:29.554 15:41:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.554 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:29.554 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.554 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:29.832 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.832 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:29.832 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.832 15:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:30.091 15:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:30.091 15:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:30.091 15:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:30.349 15:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:30.349 15:41:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:31.724 15:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:31.724 15:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:31.724 15:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:31.724 15:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:31.724 15:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:31.724 15:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:31.724 15:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:31.724 15:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:31.983 15:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:31.983 15:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:31.983 15:41:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:31.983 15:41:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:32.241 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:32.241 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:32.241 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:32.241 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:32.500 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:32.500 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:32.500 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:32.500 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:32.500 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:32.500 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:32.500 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:32.759 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:32.759 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:32.759 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:32.759 15:41:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:33.017 15:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:33.276 15:41:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:34.211 15:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:34.211 15:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:34.211 15:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.211 15:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:34.469 15:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:34.469 15:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:34.469 15:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:34.469 15:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.727 15:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:34.727 15:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:34.727 15:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.727 15:41:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:35.294 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:35.294 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:35.294 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:35.294 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:35.294 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:35.294 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:35.294 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:35.294 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:35.552 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:35.552 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:35.553 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:35.553 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:35.813 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:35.813 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:35.813 15:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:36.071 15:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:36.330 15:41:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:37.266 15:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:37.266 15:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:37.266 15:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.266 15:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:37.524 15:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:37.524 15:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:37.524 15:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.524 15:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:37.783 15:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:37.783 15:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:37.783 15:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:37.783 15:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:38.042 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:38.042 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:38.042 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:38.042 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:38.301 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:38.301 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:38.301 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:38.301 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:38.559 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:38.559 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:38.559 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:38.559 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:38.818 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:38.818 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:38.818 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:39.076 15:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:39.076 15:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:40.453 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:40.453 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:40.453 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.453 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:40.453 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:40.453 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:40.453 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.453 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:40.712 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.712 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:40.712 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.712 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:40.970 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.970 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:40.970 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.970 15:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:41.229 15:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.229 15:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:41.229 15:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:41.229 15:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.487 15:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:41.487 15:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:41.487 15:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:41.487 15:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.746 15:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.746 15:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:42.005 15:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:42.005 15:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:42.265 15:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:42.524 15:41:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:43.460 15:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:43.460 15:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:43.461 15:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.461 15:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:43.719 15:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.719 15:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:43.719 15:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.719 15:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:43.977 15:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.977 15:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:43.977 15:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.977 15:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:43.977 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.977 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:43.977 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:43.977 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.236 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.236 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:44.236 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.236 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:44.496 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.496 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:44.496 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.496 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:44.755 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.755 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:44.755 15:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:45.014 15:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:45.273 15:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:46.208 
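check_status takes six booleans and, as the @68-@73 calls throughout this trace show, asserts them in a fixed order: current, connected and accessible for the 4420 path, each immediately followed by the same field for the 4421 path. A reconstructed sketch, built on the port_status helper sketched earlier (argument handling is assumed, not copied from the script):

# check_status <4420 current> <4421 current> <4420 connected> <4421 connected> <4420 accessible> <4421 accessible>
check_status() {
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}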
15:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:46.208 15:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:46.208 15:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.208 15:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:46.466 15:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:46.466 15:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:46.466 15:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.466 15:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:46.723 15:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.723 15:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:46.723 15:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.723 15:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:46.981 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.981 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:46.981 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.981 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:47.239 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.239 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:47.239 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.239 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:47.502 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.502 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:47.502 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.502 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- 
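The jq filters above pull a single boolean out of the bdev_nvme_get_io_paths JSON. For reference, a hypothetical, heavily abridged response containing only the fields those filters touch (the real output carries more per-path detail), plus the filter applied to it:

cat <<'EOF' > /tmp/io_paths.json
{
  "poll_groups": [
    {
      "io_paths": [
        { "current": false, "connected": true, "accessible": true,
          "transport": { "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420" } },
        { "current": true, "connected": true, "accessible": true,
          "transport": { "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4421" } }
      ]
    }
  ]
}
EOF
jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' /tmp/io_paths.json
# prints: true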
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:47.759 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.759 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:47.759 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:47.759 15:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:48.017 15:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:49.392 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:49.392 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:49.392 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.392 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:49.392 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.392 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:49.392 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.392 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:49.650 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.650 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:49.650 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.651 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:49.909 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.909 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:49.909 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:49.909 15:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.167 15:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.167 15:41:45 
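set_ANA_state, called at @112, @119, @123, @129 and @133 in this trace, simply issues one nvmf_subsystem_listener_set_ana_state RPC per listener: the first argument is applied to port 4420 and the second to port 4421. A reconstructed sketch (the real helper sits at host/multipath_status.sh@59-60; the positional-argument handling shown here is assumed):

# set_ANA_state <state for listener 4420> <state for listener 4421>
set_ANA_state() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}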
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:50.167 15:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.167 15:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:50.426 15:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.426 15:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:50.426 15:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.426 15:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:50.426 15:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.426 15:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:50.426 15:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:50.685 15:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:50.944 15:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:51.879 15:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:51.880 15:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:51.880 15:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:51.880 15:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.139 15:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.139 15:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:52.139 15:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.139 15:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:52.398 15:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:52.398 15:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:52.398 15:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.398 15:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:52.965 15:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.965 15:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:52.965 15:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.965 15:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:52.965 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.965 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:52.965 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.965 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:53.224 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.224 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:53.224 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.224 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:53.484 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:53.484 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 88839 00:18:53.484 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 88839 ']' 00:18:53.484 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 88839 00:18:53.484 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:53.484 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.484 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88839 00:18:53.484 killing process with pid 88839 00:18:53.484 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:53.484 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:53.484 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88839' 00:18:53.484 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 88839 00:18:53.484 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 88839 00:18:53.484 Connection closed with partial response: 00:18:53.484 
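killprocess from common/autotest_common.sh is what tears down the bdevperf instance with pid 88839 above: it checks that the pid is non-empty and still alive with kill -0, looks up the process name with ps --no-headers -o comm= on Linux so it never signals a sudo wrapper directly, then kills and waits on it. A simplified reconstruction of that flow as it appears in the trace (the real helper also has sudo-escalation and non-Linux branches that are omitted here):

# killprocess <pid> -- simplified sketch of the teardown traced at autotest_common.sh@948-972
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 0                  # already gone, nothing to do
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1 # real helper escalates through sudo instead; omitted
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}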
00:18:53.484 00:18:53.746 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 88839 00:18:53.746 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:53.746 [2024-07-15 15:41:15.148734] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:18:53.746 [2024-07-15 15:41:15.148854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88839 ] 00:18:53.746 [2024-07-15 15:41:15.290489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.746 [2024-07-15 15:41:15.362295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.746 Running I/O for 90 seconds... 00:18:53.746 [2024-07-15 15:41:31.156022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.746 [2024-07-15 15:41:31.156630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:53.746 [2024-07-15 15:41:31.156651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.156666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.156689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.156704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.156726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.156741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:18:53.747 [2024-07-15 15:41:31.157428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.157453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.157479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:106552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.157494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.157515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.157529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.157568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.157598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.157629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.157645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.157667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.157683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.157720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.157736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.157759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.157774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.157797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.157813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.157836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.157851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.157888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.157918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.157954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.157967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.157987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.158971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.747 [2024-07-15 15:41:31.158987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:53.747 [2024-07-15 15:41:31.159024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:12 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159709] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.159966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.159990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 
sqhd:0079 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.160976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.160994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:53.748 [2024-07-15 15:41:31.161029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:53.748 [2024-07-15 15:41:31.161043] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:18:53.748 [2024-07-15 15:41:31.161074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:53.748 [2024-07-15 15:41:31.161090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs of the same form: WRITE and READ commands on qid:1 (lba 67136-107424), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged between 2024-07-15 15:41:31.161114 and 15:41:45.961770 ...]
00:18:53.751 Received shutdown signal, test time was about 31.425327 seconds
00:18:53.751
00:18:53.751                                                            Latency(us)
00:18:53.751 Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:18:53.751 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:53.751 Verification LBA range: start 0x0 length 0x4000
00:18:53.751 Nvme0n1            :      31.42    9461.66      36.96       0.00      0.00    13501.13     180.60 4026531.84
00:18:53.751 ===================================================================================================================
00:18:53.751 Total              :              9461.66      36.96       0.00      0.00    13501.13     180.60 4026531.84
00:18:53.751 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:54.011 rmmod nvme_tcp 00:18:54.011 rmmod nvme_fabrics 00:18:54.011 rmmod nvme_keyring 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 88754 ']' 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 88754 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 88754 ']' 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 88754 00:18:54.011 15:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:54.011 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:54.011 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88754 00:18:54.011 killing process with pid 88754 00:18:54.011 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:54.011 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:54.011 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88754' 00:18:54.011 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 88754 00:18:54.011 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 88754 00:18:54.270 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:54.271 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:54.271 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:54.271 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:54.271 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:54.271 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.271 15:41:49 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.271 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.271 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:54.271 00:18:54.271 real 0m36.489s 00:18:54.271 user 2m0.050s 00:18:54.271 sys 0m8.603s 00:18:54.271 ************************************ 00:18:54.271 END TEST nvmf_host_multipath_status 00:18:54.271 ************************************ 00:18:54.271 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:54.271 15:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:54.271 15:41:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:54.271 15:41:49 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:54.271 15:41:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:54.271 15:41:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:54.271 15:41:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:54.271 ************************************ 00:18:54.271 START TEST nvmf_discovery_remove_ifc 00:18:54.271 ************************************ 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:54.271 * Looking for test storage... 00:18:54.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:54.271 15:41:49 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:54.271 15:41:49 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:54.271 Cannot find device "nvmf_tgt_br" 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:18:54.271 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:54.530 Cannot find device "nvmf_tgt_br2" 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:54.530 Cannot find device "nvmf_tgt_br" 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:54.530 Cannot find device "nvmf_tgt_br2" 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:54.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:54.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link 
add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:54.530 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:54.788 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:54.788 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:54.788 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:54.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:18:54.788 00:18:54.788 --- 10.0.0.2 ping statistics --- 00:18:54.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.788 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:54.788 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:54.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:54.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:18:54.788 00:18:54.788 --- 10.0.0.3 ping statistics --- 00:18:54.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.788 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:54.788 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:54.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:54.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:54.788 00:18:54.788 --- 10.0.0.1 ping statistics --- 00:18:54.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.788 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:54.788 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.788 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:18:54.788 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:54.788 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.788 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:54.788 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90116 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90116 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90116 ']' 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:54.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:54.789 15:41:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:54.789 [2024-07-15 15:41:49.784641] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:18:54.789 [2024-07-15 15:41:49.784727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.047 [2024-07-15 15:41:49.920672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.047 [2024-07-15 15:41:49.970530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.047 [2024-07-15 15:41:49.970587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.047 [2024-07-15 15:41:49.970597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.047 [2024-07-15 15:41:49.970603] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.047 [2024-07-15 15:41:49.970610] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.047 [2024-07-15 15:41:49.970636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:55.047 [2024-07-15 15:41:50.106673] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.047 [2024-07-15 15:41:50.114815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:55.047 null0 00:18:55.047 [2024-07-15 15:41:50.146756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90152 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90152 /tmp/host.sock 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90152 ']' 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.047 Waiting for process to start up and listen on UNIX 
domain socket /tmp/host.sock... 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.047 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:55.306 [2024-07-15 15:41:50.216659] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:18:55.306 [2024-07-15 15:41:50.216735] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90152 ] 00:18:55.306 [2024-07-15 15:41:50.348378] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.306 [2024-07-15 15:41:50.398750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.564 15:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:56.499 [2024-07-15 15:41:51.539495] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:56.500 [2024-07-15 15:41:51.539519] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:56.500 [2024-07-15 15:41:51.539584] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:56.500 [2024-07-15 15:41:51.625617] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:56.758 
[2024-07-15 15:41:51.681866] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:56.758 [2024-07-15 15:41:51.681953] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:56.758 [2024-07-15 15:41:51.681979] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:56.758 [2024-07-15 15:41:51.681993] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:56.758 [2024-07-15 15:41:51.682012] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:56.758 [2024-07-15 15:41:51.688191] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1d6f650 was disconnected and freed. delete nvme_qpair. 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.758 15:41:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:56.758 15:41:51 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:57.693 15:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:57.693 15:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:57.693 15:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:57.693 15:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.693 15:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:57.693 15:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:57.693 15:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:57.952 15:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.952 15:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:57.952 15:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:58.887 15:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:58.887 15:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:58.887 15:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:58.887 15:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:58.887 15:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:58.887 15:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.887 15:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:58.887 15:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.887 15:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:58.887 15:41:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:59.823 15:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:59.823 15:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:59.823 15:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.823 15:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:59.823 15:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:59.823 15:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:59.823 15:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:00.081 15:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.081 15:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:00.081 15:41:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:01.014 15:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:01.014 15:41:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:01.014 15:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.014 15:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:01.014 15:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:01.014 15:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:01.014 15:41:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:01.014 15:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.014 15:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:01.014 15:41:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:01.945 15:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:01.945 15:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:01.945 15:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:01.945 15:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.945 15:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:01.945 15:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:01.945 15:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:02.202 15:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.202 15:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:02.202 15:41:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:02.202 [2024-07-15 15:41:57.111129] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:02.202 [2024-07-15 15:41:57.111199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.202 [2024-07-15 15:41:57.111213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.202 [2024-07-15 15:41:57.111224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.202 [2024-07-15 15:41:57.111232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.202 [2024-07-15 15:41:57.111240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.202 [2024-07-15 15:41:57.111248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.202 [2024-07-15 15:41:57.111257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.202 [2024-07-15 15:41:57.111265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.202 [2024-07-15 15:41:57.111274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.202 [2024-07-15 15:41:57.111281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.202 [2024-07-15 15:41:57.111289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38900 is same with the state(5) to be set 00:19:02.203 [2024-07-15 15:41:57.121112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d38900 (9): Bad file descriptor 00:19:02.203 [2024-07-15 15:41:57.131139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:03.164 15:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:03.164 15:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:03.164 15:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:03.164 15:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.164 15:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:03.164 15:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:03.164 15:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:03.164 [2024-07-15 15:41:58.148565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:03.164 [2024-07-15 15:41:58.148644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d38900 with addr=10.0.0.2, port=4420 00:19:03.164 [2024-07-15 15:41:58.148659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38900 is same with the state(5) to be set 00:19:03.164 [2024-07-15 15:41:58.148690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d38900 (9): Bad file descriptor 00:19:03.164 [2024-07-15 15:41:58.149081] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:03.164 [2024-07-15 15:41:58.149106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:03.164 [2024-07-15 15:41:58.149116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:03.164 [2024-07-15 15:41:58.149125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:03.164 [2024-07-15 15:41:58.149143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
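Editor's note: the quick give-up seen above (connect() errno 110, "Resetting controller failed", controller left in failed state within a couple of seconds) is consistent with the short reconnect knobs passed when discovery was started on the host socket earlier in this log. Consolidated for reference, and assuming rpc_cmd resolves to SPDK's scripts/rpc.py against /tmp/host.sock (the flags below are copied from the trace, not re-derived from rpc.py's help text), that setup sequence is roughly:

    # Host-side discovery setup as traced earlier in this log (sketch).
    rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc.py -s /tmp/host.sock framework_start_init
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach

With a 1-second reconnect delay and a 2-second controller-loss timeout, a downed target interface is expected to surface as exactly this short burst of reconnect errors before the attached bdev is deleted.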
00:19:03.164 [2024-07-15 15:41:58.149153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:03.164 15:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.164 15:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:03.164 15:41:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:04.146 [2024-07-15 15:41:59.149187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:04.146 [2024-07-15 15:41:59.149236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:04.146 [2024-07-15 15:41:59.149262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:04.146 [2024-07-15 15:41:59.149271] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:19:04.146 [2024-07-15 15:41:59.149286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:04.146 [2024-07-15 15:41:59.149309] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:04.146 [2024-07-15 15:41:59.149342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.146 [2024-07-15 15:41:59.149355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.146 [2024-07-15 15:41:59.149367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.146 [2024-07-15 15:41:59.149374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.146 [2024-07-15 15:41:59.149383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.146 [2024-07-15 15:41:59.149390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.146 [2024-07-15 15:41:59.149398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.146 [2024-07-15 15:41:59.149406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.146 [2024-07-15 15:41:59.149414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.146 [2024-07-15 15:41:59.149421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.146 [2024-07-15 15:41:59.149445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
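Editor's note: the error burst above is the intended fault. Earlier in the trace the test deleted the target's address and downed its interface inside the target network namespace, and it restores both a few lines further down once the bdev list has emptied. Pulled together from the traced commands, the fault-injection and recovery steps are simply:

    # Fault injection: remove the target-side address and down the interface
    # (commands exactly as traced by discovery_remove_ifc.sh above).
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # Recovery, performed below once the bdev has disappeared.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up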
00:19:04.146 [2024-07-15 15:41:59.149614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdb3e0 (9): Bad file descriptor 00:19:04.146 [2024-07-15 15:41:59.150627] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:04.146 [2024-07-15 15:41:59.150667] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:04.146 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.405 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:04.405 15:41:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:05.340 15:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:05.340 15:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:05.340 15:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.340 15:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:05.340 15:42:00 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:05.340 15:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:05.340 15:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:05.340 15:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.340 15:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:05.340 15:42:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:06.274 [2024-07-15 15:42:01.161006] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:06.274 [2024-07-15 15:42:01.161033] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:06.274 [2024-07-15 15:42:01.161068] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:06.274 [2024-07-15 15:42:01.247143] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:06.274 [2024-07-15 15:42:01.302873] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:06.274 [2024-07-15 15:42:01.302938] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:06.274 [2024-07-15 15:42:01.302962] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:06.274 [2024-07-15 15:42:01.302978] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:06.274 [2024-07-15 15:42:01.302987] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:06.274 [2024-07-15 15:42:01.309436] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1d54300 was disconnected and freed. delete nvme_qpair. 
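Editor's note: throughout this test the pass/fail decision is made by repeatedly dumping the bdev list over the host app's RPC socket until it matches the expected value (nvme0n1, then an empty list, then nvme1n1). A minimal sketch of that polling pattern, assuming scripts/rpc.py and jq as used in the trace (the real get_bdev_list/wait_for_bdev helpers in discovery_remove_ifc.sh may add timeouts and trap handling):

    # Current bdev names known to the host app, as one space-separated line.
    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list equals the expected value
    # ("" while the controller is gone, "nvme1n1" after rediscovery).
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

The traced checks such as [[ nvme1n1 != \n\v\m\e\1\n\1 ]] above are this comparison after xtrace expansion.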
00:19:06.274 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:06.274 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:06.274 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.274 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:06.274 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:06.274 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:06.274 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90152 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90152 ']' 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90152 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90152 00:19:06.533 killing process with pid 90152 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90152' 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90152 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90152 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:06.533 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:06.533 rmmod nvme_tcp 00:19:06.792 rmmod nvme_fabrics 00:19:06.792 rmmod nvme_keyring 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:19:06.792 15:42:01 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90116 ']' 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90116 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90116 ']' 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90116 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90116 00:19:06.792 killing process with pid 90116 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90116' 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90116 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90116 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:06.792 00:19:06.792 real 0m12.656s 00:19:06.792 user 0m22.902s 00:19:06.792 sys 0m1.441s 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:06.792 ************************************ 00:19:06.792 END TEST nvmf_discovery_remove_ifc 00:19:06.792 ************************************ 00:19:06.792 15:42:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:07.051 15:42:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:07.051 15:42:01 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:07.051 15:42:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:07.051 15:42:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:07.051 15:42:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:07.051 ************************************ 00:19:07.051 START TEST nvmf_identify_kernel_target 00:19:07.051 ************************************ 00:19:07.051 15:42:01 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:07.051 * Looking for test storage... 00:19:07.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:07.051 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:07.052 Cannot find device "nvmf_tgt_br" 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:07.052 Cannot find device "nvmf_tgt_br2" 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:07.052 Cannot find device "nvmf_tgt_br" 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:07.052 Cannot find device "nvmf_tgt_br2" 00:19:07.052 15:42:02 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:07.052 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:07.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:07.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:07.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:19:07.311 00:19:07.311 --- 10.0.0.2 ping statistics --- 00:19:07.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.311 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:07.311 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:07.311 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:19:07.311 00:19:07.311 --- 10.0.0.3 ping statistics --- 00:19:07.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.311 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:07.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:07.311 00:19:07.311 --- 10.0.0.1 ping statistics --- 00:19:07.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.311 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.311 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:07.569 15:42:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:07.826 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:07.826 Waiting for block devices as requested 00:19:07.826 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:08.083 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:08.083 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:08.083 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:08.083 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:08.083 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:08.083 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:08.083 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:08.083 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:08.083 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:08.083 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:08.083 No valid GPT data, bailing 00:19:08.083 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:08.083 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:08.083 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:08.084 No valid GPT data, bailing 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:08.084 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:08.340 No valid GPT data, bailing 00:19:08.340 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:08.340 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:08.340 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:08.340 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:08.340 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:08.340 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:08.340 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:08.340 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:08.340 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:08.340 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:08.340 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:08.340 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:08.341 No valid GPT data, bailing 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
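Editor's note: the mkdir above begins the kernel nvmet target configuration; the namespace, port and linking steps follow in the trace below. Because xtrace does not print redirection targets, the echo lines appear without their destination files. A consolidated sketch of the sequence, with the configfs attribute names filled in from the usual /sys/kernel/config/nvmet layout (those names are assumptions, they are not visible in this log):

    # Sketch of the kernel target setup traced below; attribute file names
    # (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumed.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # identity string from the trace
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"         # unused block device found above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover call that follows then reports two discovery log entries on 10.0.0.1:4420, one for the discovery subsystem and one for nqn.2016-06.io.spdk:testnqn, which is what the output below shows.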
00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -a 10.0.0.1 -t tcp -s 4420 00:19:08.341 00:19:08.341 Discovery Log Number of Records 2, Generation counter 2 00:19:08.341 =====Discovery Log Entry 0====== 00:19:08.341 trtype: tcp 00:19:08.341 adrfam: ipv4 00:19:08.341 subtype: current discovery subsystem 00:19:08.341 treq: not specified, sq flow control disable supported 00:19:08.341 portid: 1 00:19:08.341 trsvcid: 4420 00:19:08.341 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:08.341 traddr: 10.0.0.1 00:19:08.341 eflags: none 00:19:08.341 sectype: none 00:19:08.341 =====Discovery Log Entry 1====== 00:19:08.341 trtype: tcp 00:19:08.341 adrfam: ipv4 00:19:08.341 subtype: nvme subsystem 00:19:08.341 treq: not specified, sq flow control disable supported 00:19:08.341 portid: 1 00:19:08.341 trsvcid: 4420 00:19:08.341 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:08.341 traddr: 10.0.0.1 00:19:08.341 eflags: none 00:19:08.341 sectype: none 00:19:08.341 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:08.341 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:08.599 ===================================================== 00:19:08.599 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:08.599 ===================================================== 00:19:08.599 Controller Capabilities/Features 00:19:08.599 ================================ 00:19:08.599 Vendor ID: 0000 00:19:08.599 Subsystem Vendor ID: 0000 00:19:08.599 Serial Number: 3c81a24daceeaf328214 00:19:08.599 Model Number: Linux 00:19:08.599 Firmware Version: 6.7.0-68 00:19:08.599 Recommended Arb Burst: 0 00:19:08.599 IEEE OUI Identifier: 00 00 00 00:19:08.599 Multi-path I/O 00:19:08.599 May have multiple subsystem ports: No 00:19:08.599 May have multiple controllers: No 00:19:08.599 Associated with SR-IOV VF: No 00:19:08.599 Max Data Transfer Size: Unlimited 00:19:08.599 Max Number of Namespaces: 0 
00:19:08.599 Max Number of I/O Queues: 1024 00:19:08.599 NVMe Specification Version (VS): 1.3 00:19:08.599 NVMe Specification Version (Identify): 1.3 00:19:08.599 Maximum Queue Entries: 1024 00:19:08.599 Contiguous Queues Required: No 00:19:08.599 Arbitration Mechanisms Supported 00:19:08.599 Weighted Round Robin: Not Supported 00:19:08.599 Vendor Specific: Not Supported 00:19:08.599 Reset Timeout: 7500 ms 00:19:08.599 Doorbell Stride: 4 bytes 00:19:08.599 NVM Subsystem Reset: Not Supported 00:19:08.599 Command Sets Supported 00:19:08.599 NVM Command Set: Supported 00:19:08.599 Boot Partition: Not Supported 00:19:08.599 Memory Page Size Minimum: 4096 bytes 00:19:08.599 Memory Page Size Maximum: 4096 bytes 00:19:08.599 Persistent Memory Region: Not Supported 00:19:08.599 Optional Asynchronous Events Supported 00:19:08.599 Namespace Attribute Notices: Not Supported 00:19:08.599 Firmware Activation Notices: Not Supported 00:19:08.599 ANA Change Notices: Not Supported 00:19:08.599 PLE Aggregate Log Change Notices: Not Supported 00:19:08.599 LBA Status Info Alert Notices: Not Supported 00:19:08.599 EGE Aggregate Log Change Notices: Not Supported 00:19:08.599 Normal NVM Subsystem Shutdown event: Not Supported 00:19:08.599 Zone Descriptor Change Notices: Not Supported 00:19:08.599 Discovery Log Change Notices: Supported 00:19:08.599 Controller Attributes 00:19:08.599 128-bit Host Identifier: Not Supported 00:19:08.599 Non-Operational Permissive Mode: Not Supported 00:19:08.599 NVM Sets: Not Supported 00:19:08.599 Read Recovery Levels: Not Supported 00:19:08.599 Endurance Groups: Not Supported 00:19:08.599 Predictable Latency Mode: Not Supported 00:19:08.599 Traffic Based Keep ALive: Not Supported 00:19:08.599 Namespace Granularity: Not Supported 00:19:08.599 SQ Associations: Not Supported 00:19:08.599 UUID List: Not Supported 00:19:08.599 Multi-Domain Subsystem: Not Supported 00:19:08.599 Fixed Capacity Management: Not Supported 00:19:08.599 Variable Capacity Management: Not Supported 00:19:08.599 Delete Endurance Group: Not Supported 00:19:08.599 Delete NVM Set: Not Supported 00:19:08.599 Extended LBA Formats Supported: Not Supported 00:19:08.599 Flexible Data Placement Supported: Not Supported 00:19:08.599 00:19:08.599 Controller Memory Buffer Support 00:19:08.599 ================================ 00:19:08.599 Supported: No 00:19:08.599 00:19:08.599 Persistent Memory Region Support 00:19:08.599 ================================ 00:19:08.599 Supported: No 00:19:08.599 00:19:08.599 Admin Command Set Attributes 00:19:08.599 ============================ 00:19:08.599 Security Send/Receive: Not Supported 00:19:08.599 Format NVM: Not Supported 00:19:08.599 Firmware Activate/Download: Not Supported 00:19:08.599 Namespace Management: Not Supported 00:19:08.599 Device Self-Test: Not Supported 00:19:08.599 Directives: Not Supported 00:19:08.599 NVMe-MI: Not Supported 00:19:08.599 Virtualization Management: Not Supported 00:19:08.599 Doorbell Buffer Config: Not Supported 00:19:08.599 Get LBA Status Capability: Not Supported 00:19:08.599 Command & Feature Lockdown Capability: Not Supported 00:19:08.599 Abort Command Limit: 1 00:19:08.599 Async Event Request Limit: 1 00:19:08.599 Number of Firmware Slots: N/A 00:19:08.599 Firmware Slot 1 Read-Only: N/A 00:19:08.599 Firmware Activation Without Reset: N/A 00:19:08.599 Multiple Update Detection Support: N/A 00:19:08.599 Firmware Update Granularity: No Information Provided 00:19:08.599 Per-Namespace SMART Log: No 00:19:08.599 Asymmetric Namespace Access Log Page: 
Not Supported 00:19:08.599 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:08.599 Command Effects Log Page: Not Supported 00:19:08.599 Get Log Page Extended Data: Supported 00:19:08.599 Telemetry Log Pages: Not Supported 00:19:08.599 Persistent Event Log Pages: Not Supported 00:19:08.599 Supported Log Pages Log Page: May Support 00:19:08.599 Commands Supported & Effects Log Page: Not Supported 00:19:08.599 Feature Identifiers & Effects Log Page:May Support 00:19:08.599 NVMe-MI Commands & Effects Log Page: May Support 00:19:08.599 Data Area 4 for Telemetry Log: Not Supported 00:19:08.599 Error Log Page Entries Supported: 1 00:19:08.599 Keep Alive: Not Supported 00:19:08.599 00:19:08.599 NVM Command Set Attributes 00:19:08.599 ========================== 00:19:08.599 Submission Queue Entry Size 00:19:08.599 Max: 1 00:19:08.599 Min: 1 00:19:08.599 Completion Queue Entry Size 00:19:08.599 Max: 1 00:19:08.599 Min: 1 00:19:08.599 Number of Namespaces: 0 00:19:08.599 Compare Command: Not Supported 00:19:08.599 Write Uncorrectable Command: Not Supported 00:19:08.599 Dataset Management Command: Not Supported 00:19:08.599 Write Zeroes Command: Not Supported 00:19:08.599 Set Features Save Field: Not Supported 00:19:08.599 Reservations: Not Supported 00:19:08.599 Timestamp: Not Supported 00:19:08.599 Copy: Not Supported 00:19:08.599 Volatile Write Cache: Not Present 00:19:08.599 Atomic Write Unit (Normal): 1 00:19:08.599 Atomic Write Unit (PFail): 1 00:19:08.599 Atomic Compare & Write Unit: 1 00:19:08.599 Fused Compare & Write: Not Supported 00:19:08.599 Scatter-Gather List 00:19:08.599 SGL Command Set: Supported 00:19:08.599 SGL Keyed: Not Supported 00:19:08.599 SGL Bit Bucket Descriptor: Not Supported 00:19:08.599 SGL Metadata Pointer: Not Supported 00:19:08.599 Oversized SGL: Not Supported 00:19:08.599 SGL Metadata Address: Not Supported 00:19:08.599 SGL Offset: Supported 00:19:08.599 Transport SGL Data Block: Not Supported 00:19:08.599 Replay Protected Memory Block: Not Supported 00:19:08.599 00:19:08.599 Firmware Slot Information 00:19:08.599 ========================= 00:19:08.599 Active slot: 0 00:19:08.599 00:19:08.599 00:19:08.599 Error Log 00:19:08.599 ========= 00:19:08.599 00:19:08.599 Active Namespaces 00:19:08.599 ================= 00:19:08.599 Discovery Log Page 00:19:08.599 ================== 00:19:08.599 Generation Counter: 2 00:19:08.599 Number of Records: 2 00:19:08.599 Record Format: 0 00:19:08.599 00:19:08.599 Discovery Log Entry 0 00:19:08.599 ---------------------- 00:19:08.599 Transport Type: 3 (TCP) 00:19:08.599 Address Family: 1 (IPv4) 00:19:08.599 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:08.599 Entry Flags: 00:19:08.599 Duplicate Returned Information: 0 00:19:08.599 Explicit Persistent Connection Support for Discovery: 0 00:19:08.599 Transport Requirements: 00:19:08.599 Secure Channel: Not Specified 00:19:08.599 Port ID: 1 (0x0001) 00:19:08.599 Controller ID: 65535 (0xffff) 00:19:08.599 Admin Max SQ Size: 32 00:19:08.599 Transport Service Identifier: 4420 00:19:08.599 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:08.599 Transport Address: 10.0.0.1 00:19:08.599 Discovery Log Entry 1 00:19:08.599 ---------------------- 00:19:08.599 Transport Type: 3 (TCP) 00:19:08.599 Address Family: 1 (IPv4) 00:19:08.599 Subsystem Type: 2 (NVM Subsystem) 00:19:08.599 Entry Flags: 00:19:08.599 Duplicate Returned Information: 0 00:19:08.599 Explicit Persistent Connection Support for Discovery: 0 00:19:08.599 Transport Requirements: 00:19:08.599 
Secure Channel: Not Specified 00:19:08.599 Port ID: 1 (0x0001) 00:19:08.599 Controller ID: 65535 (0xffff) 00:19:08.599 Admin Max SQ Size: 32 00:19:08.599 Transport Service Identifier: 4420 00:19:08.600 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:08.600 Transport Address: 10.0.0.1 00:19:08.600 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:08.859 get_feature(0x01) failed 00:19:08.859 get_feature(0x02) failed 00:19:08.859 get_feature(0x04) failed 00:19:08.859 ===================================================== 00:19:08.859 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:08.859 ===================================================== 00:19:08.859 Controller Capabilities/Features 00:19:08.859 ================================ 00:19:08.859 Vendor ID: 0000 00:19:08.859 Subsystem Vendor ID: 0000 00:19:08.859 Serial Number: 8e7480705ea67b676baa 00:19:08.859 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:08.859 Firmware Version: 6.7.0-68 00:19:08.859 Recommended Arb Burst: 6 00:19:08.859 IEEE OUI Identifier: 00 00 00 00:19:08.859 Multi-path I/O 00:19:08.859 May have multiple subsystem ports: Yes 00:19:08.859 May have multiple controllers: Yes 00:19:08.859 Associated with SR-IOV VF: No 00:19:08.859 Max Data Transfer Size: Unlimited 00:19:08.859 Max Number of Namespaces: 1024 00:19:08.859 Max Number of I/O Queues: 128 00:19:08.859 NVMe Specification Version (VS): 1.3 00:19:08.859 NVMe Specification Version (Identify): 1.3 00:19:08.859 Maximum Queue Entries: 1024 00:19:08.859 Contiguous Queues Required: No 00:19:08.859 Arbitration Mechanisms Supported 00:19:08.859 Weighted Round Robin: Not Supported 00:19:08.859 Vendor Specific: Not Supported 00:19:08.859 Reset Timeout: 7500 ms 00:19:08.859 Doorbell Stride: 4 bytes 00:19:08.859 NVM Subsystem Reset: Not Supported 00:19:08.859 Command Sets Supported 00:19:08.859 NVM Command Set: Supported 00:19:08.859 Boot Partition: Not Supported 00:19:08.859 Memory Page Size Minimum: 4096 bytes 00:19:08.859 Memory Page Size Maximum: 4096 bytes 00:19:08.859 Persistent Memory Region: Not Supported 00:19:08.859 Optional Asynchronous Events Supported 00:19:08.859 Namespace Attribute Notices: Supported 00:19:08.859 Firmware Activation Notices: Not Supported 00:19:08.859 ANA Change Notices: Supported 00:19:08.859 PLE Aggregate Log Change Notices: Not Supported 00:19:08.859 LBA Status Info Alert Notices: Not Supported 00:19:08.859 EGE Aggregate Log Change Notices: Not Supported 00:19:08.859 Normal NVM Subsystem Shutdown event: Not Supported 00:19:08.859 Zone Descriptor Change Notices: Not Supported 00:19:08.859 Discovery Log Change Notices: Not Supported 00:19:08.859 Controller Attributes 00:19:08.859 128-bit Host Identifier: Supported 00:19:08.859 Non-Operational Permissive Mode: Not Supported 00:19:08.859 NVM Sets: Not Supported 00:19:08.859 Read Recovery Levels: Not Supported 00:19:08.859 Endurance Groups: Not Supported 00:19:08.859 Predictable Latency Mode: Not Supported 00:19:08.859 Traffic Based Keep ALive: Supported 00:19:08.859 Namespace Granularity: Not Supported 00:19:08.859 SQ Associations: Not Supported 00:19:08.859 UUID List: Not Supported 00:19:08.859 Multi-Domain Subsystem: Not Supported 00:19:08.859 Fixed Capacity Management: Not Supported 00:19:08.859 Variable Capacity Management: Not Supported 00:19:08.859 
Delete Endurance Group: Not Supported 00:19:08.859 Delete NVM Set: Not Supported 00:19:08.859 Extended LBA Formats Supported: Not Supported 00:19:08.859 Flexible Data Placement Supported: Not Supported 00:19:08.859 00:19:08.859 Controller Memory Buffer Support 00:19:08.859 ================================ 00:19:08.859 Supported: No 00:19:08.859 00:19:08.859 Persistent Memory Region Support 00:19:08.859 ================================ 00:19:08.859 Supported: No 00:19:08.859 00:19:08.859 Admin Command Set Attributes 00:19:08.859 ============================ 00:19:08.859 Security Send/Receive: Not Supported 00:19:08.859 Format NVM: Not Supported 00:19:08.859 Firmware Activate/Download: Not Supported 00:19:08.859 Namespace Management: Not Supported 00:19:08.859 Device Self-Test: Not Supported 00:19:08.859 Directives: Not Supported 00:19:08.859 NVMe-MI: Not Supported 00:19:08.859 Virtualization Management: Not Supported 00:19:08.859 Doorbell Buffer Config: Not Supported 00:19:08.859 Get LBA Status Capability: Not Supported 00:19:08.859 Command & Feature Lockdown Capability: Not Supported 00:19:08.859 Abort Command Limit: 4 00:19:08.859 Async Event Request Limit: 4 00:19:08.859 Number of Firmware Slots: N/A 00:19:08.859 Firmware Slot 1 Read-Only: N/A 00:19:08.859 Firmware Activation Without Reset: N/A 00:19:08.859 Multiple Update Detection Support: N/A 00:19:08.859 Firmware Update Granularity: No Information Provided 00:19:08.859 Per-Namespace SMART Log: Yes 00:19:08.859 Asymmetric Namespace Access Log Page: Supported 00:19:08.859 ANA Transition Time : 10 sec 00:19:08.859 00:19:08.859 Asymmetric Namespace Access Capabilities 00:19:08.859 ANA Optimized State : Supported 00:19:08.859 ANA Non-Optimized State : Supported 00:19:08.859 ANA Inaccessible State : Supported 00:19:08.859 ANA Persistent Loss State : Supported 00:19:08.859 ANA Change State : Supported 00:19:08.859 ANAGRPID is not changed : No 00:19:08.859 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:08.859 00:19:08.859 ANA Group Identifier Maximum : 128 00:19:08.859 Number of ANA Group Identifiers : 128 00:19:08.859 Max Number of Allowed Namespaces : 1024 00:19:08.859 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:08.859 Command Effects Log Page: Supported 00:19:08.859 Get Log Page Extended Data: Supported 00:19:08.859 Telemetry Log Pages: Not Supported 00:19:08.859 Persistent Event Log Pages: Not Supported 00:19:08.859 Supported Log Pages Log Page: May Support 00:19:08.859 Commands Supported & Effects Log Page: Not Supported 00:19:08.859 Feature Identifiers & Effects Log Page:May Support 00:19:08.859 NVMe-MI Commands & Effects Log Page: May Support 00:19:08.859 Data Area 4 for Telemetry Log: Not Supported 00:19:08.859 Error Log Page Entries Supported: 128 00:19:08.859 Keep Alive: Supported 00:19:08.859 Keep Alive Granularity: 1000 ms 00:19:08.859 00:19:08.859 NVM Command Set Attributes 00:19:08.859 ========================== 00:19:08.859 Submission Queue Entry Size 00:19:08.859 Max: 64 00:19:08.859 Min: 64 00:19:08.859 Completion Queue Entry Size 00:19:08.859 Max: 16 00:19:08.859 Min: 16 00:19:08.859 Number of Namespaces: 1024 00:19:08.859 Compare Command: Not Supported 00:19:08.859 Write Uncorrectable Command: Not Supported 00:19:08.859 Dataset Management Command: Supported 00:19:08.859 Write Zeroes Command: Supported 00:19:08.859 Set Features Save Field: Not Supported 00:19:08.859 Reservations: Not Supported 00:19:08.859 Timestamp: Not Supported 00:19:08.859 Copy: Not Supported 00:19:08.859 Volatile Write Cache: Present 
00:19:08.859 Atomic Write Unit (Normal): 1 00:19:08.859 Atomic Write Unit (PFail): 1 00:19:08.859 Atomic Compare & Write Unit: 1 00:19:08.859 Fused Compare & Write: Not Supported 00:19:08.859 Scatter-Gather List 00:19:08.859 SGL Command Set: Supported 00:19:08.859 SGL Keyed: Not Supported 00:19:08.859 SGL Bit Bucket Descriptor: Not Supported 00:19:08.859 SGL Metadata Pointer: Not Supported 00:19:08.859 Oversized SGL: Not Supported 00:19:08.859 SGL Metadata Address: Not Supported 00:19:08.859 SGL Offset: Supported 00:19:08.859 Transport SGL Data Block: Not Supported 00:19:08.859 Replay Protected Memory Block: Not Supported 00:19:08.859 00:19:08.859 Firmware Slot Information 00:19:08.859 ========================= 00:19:08.859 Active slot: 0 00:19:08.859 00:19:08.859 Asymmetric Namespace Access 00:19:08.859 =========================== 00:19:08.859 Change Count : 0 00:19:08.859 Number of ANA Group Descriptors : 1 00:19:08.859 ANA Group Descriptor : 0 00:19:08.859 ANA Group ID : 1 00:19:08.859 Number of NSID Values : 1 00:19:08.859 Change Count : 0 00:19:08.859 ANA State : 1 00:19:08.859 Namespace Identifier : 1 00:19:08.859 00:19:08.859 Commands Supported and Effects 00:19:08.859 ============================== 00:19:08.859 Admin Commands 00:19:08.859 -------------- 00:19:08.859 Get Log Page (02h): Supported 00:19:08.859 Identify (06h): Supported 00:19:08.859 Abort (08h): Supported 00:19:08.859 Set Features (09h): Supported 00:19:08.859 Get Features (0Ah): Supported 00:19:08.859 Asynchronous Event Request (0Ch): Supported 00:19:08.859 Keep Alive (18h): Supported 00:19:08.859 I/O Commands 00:19:08.859 ------------ 00:19:08.860 Flush (00h): Supported 00:19:08.860 Write (01h): Supported LBA-Change 00:19:08.860 Read (02h): Supported 00:19:08.860 Write Zeroes (08h): Supported LBA-Change 00:19:08.860 Dataset Management (09h): Supported 00:19:08.860 00:19:08.860 Error Log 00:19:08.860 ========= 00:19:08.860 Entry: 0 00:19:08.860 Error Count: 0x3 00:19:08.860 Submission Queue Id: 0x0 00:19:08.860 Command Id: 0x5 00:19:08.860 Phase Bit: 0 00:19:08.860 Status Code: 0x2 00:19:08.860 Status Code Type: 0x0 00:19:08.860 Do Not Retry: 1 00:19:08.860 Error Location: 0x28 00:19:08.860 LBA: 0x0 00:19:08.860 Namespace: 0x0 00:19:08.860 Vendor Log Page: 0x0 00:19:08.860 ----------- 00:19:08.860 Entry: 1 00:19:08.860 Error Count: 0x2 00:19:08.860 Submission Queue Id: 0x0 00:19:08.860 Command Id: 0x5 00:19:08.860 Phase Bit: 0 00:19:08.860 Status Code: 0x2 00:19:08.860 Status Code Type: 0x0 00:19:08.860 Do Not Retry: 1 00:19:08.860 Error Location: 0x28 00:19:08.860 LBA: 0x0 00:19:08.860 Namespace: 0x0 00:19:08.860 Vendor Log Page: 0x0 00:19:08.860 ----------- 00:19:08.860 Entry: 2 00:19:08.860 Error Count: 0x1 00:19:08.860 Submission Queue Id: 0x0 00:19:08.860 Command Id: 0x4 00:19:08.860 Phase Bit: 0 00:19:08.860 Status Code: 0x2 00:19:08.860 Status Code Type: 0x0 00:19:08.860 Do Not Retry: 1 00:19:08.860 Error Location: 0x28 00:19:08.860 LBA: 0x0 00:19:08.860 Namespace: 0x0 00:19:08.860 Vendor Log Page: 0x0 00:19:08.860 00:19:08.860 Number of Queues 00:19:08.860 ================ 00:19:08.860 Number of I/O Submission Queues: 128 00:19:08.860 Number of I/O Completion Queues: 128 00:19:08.860 00:19:08.860 ZNS Specific Controller Data 00:19:08.860 ============================ 00:19:08.860 Zone Append Size Limit: 0 00:19:08.860 00:19:08.860 00:19:08.860 Active Namespaces 00:19:08.860 ================= 00:19:08.860 get_feature(0x05) failed 00:19:08.860 Namespace ID:1 00:19:08.860 Command Set Identifier: NVM (00h) 
00:19:08.860 Deallocate: Supported 00:19:08.860 Deallocated/Unwritten Error: Not Supported 00:19:08.860 Deallocated Read Value: Unknown 00:19:08.860 Deallocate in Write Zeroes: Not Supported 00:19:08.860 Deallocated Guard Field: 0xFFFF 00:19:08.860 Flush: Supported 00:19:08.860 Reservation: Not Supported 00:19:08.860 Namespace Sharing Capabilities: Multiple Controllers 00:19:08.860 Size (in LBAs): 1310720 (5GiB) 00:19:08.860 Capacity (in LBAs): 1310720 (5GiB) 00:19:08.860 Utilization (in LBAs): 1310720 (5GiB) 00:19:08.860 UUID: ef33e69b-e065-4eb2-a8f6-fea9b9450988 00:19:08.860 Thin Provisioning: Not Supported 00:19:08.860 Per-NS Atomic Units: Yes 00:19:08.860 Atomic Boundary Size (Normal): 0 00:19:08.860 Atomic Boundary Size (PFail): 0 00:19:08.860 Atomic Boundary Offset: 0 00:19:08.860 NGUID/EUI64 Never Reused: No 00:19:08.860 ANA group ID: 1 00:19:08.860 Namespace Write Protected: No 00:19:08.860 Number of LBA Formats: 1 00:19:08.860 Current LBA Format: LBA Format #00 00:19:08.860 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:08.860 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:08.860 rmmod nvme_tcp 00:19:08.860 rmmod nvme_fabrics 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:08.860 
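With both identify runs done, nvmftestfini unloads the host-side nvme-tcp/nvme-fabrics modules, and clean_kernel_target (starting at the existence check that closes the line above) dismantles the configfs tree in reverse order. A condensed sketch of that teardown, with the same caveat as before that some echo/rmdir targets are elided in the trace (disabling the namespace first is the usual step):

    #!/usr/bin/env bash
    # Sketch: tear down the kernel nvmet target created earlier.
    set -euo pipefail

    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet

    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # stop serving the namespace
    rm -f   "$cfg/ports/1/subsystems/$nqn"                # unpublish from the port
    rmdir   "$cfg/subsystems/$nqn/namespaces/1"
    rmdir   "$cfg/ports/1"
    rmdir   "$cfg/subsystems/$nqn"

    # Drop the target modules once nothing references them
    modprobe -r nvmet_tcp nvmet

The trace that follows shows exactly this sequence before handing the NVMe devices back to the SPDK setup.sh script.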
15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:08.860 15:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:09.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:09.720 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:09.720 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:09.720 00:19:09.720 real 0m2.847s 00:19:09.720 user 0m0.975s 00:19:09.720 sys 0m1.350s 00:19:09.720 15:42:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:09.720 15:42:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.720 ************************************ 00:19:09.720 END TEST nvmf_identify_kernel_target 00:19:09.720 ************************************ 00:19:09.980 15:42:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:09.980 15:42:04 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:09.980 15:42:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:09.980 15:42:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.980 15:42:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:09.980 ************************************ 00:19:09.980 START TEST nvmf_auth_host 00:19:09.980 ************************************ 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:09.980 * Looking for test storage... 
00:19:09.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:09.980 15:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:09.980 Cannot find device "nvmf_tgt_br" 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.980 Cannot find device "nvmf_tgt_br2" 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:09.980 Cannot find device "nvmf_tgt_br" 
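The `Cannot find device` messages above are only the harness flushing leftovers from a previous run; nvmf_veth_init then builds the private namespace and veth topology that the trace below walks through: the initiator stays in the root namespace at 10.0.0.1 while the target runs inside nvmf_tgt_ns_spdk with 10.0.0.2 and 10.0.0.3, all joined by a bridge. A condensed sketch of that topology:

    #!/usr/bin/env bash
    # Sketch: veth + netns topology used by the TCP tests (mirrors nvmf_veth_init).
    set -euo pipefail

    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one for the initiator, two for the target namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator in the root namespace, target inside the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side ends together
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP traffic in and allow forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the trace are simply a reachability check across the bridge before the nvmf_tgt application is started inside the namespace.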
00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:09.980 Cannot find device "nvmf_tgt_br2" 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:09.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:09.980 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:10.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:10.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:19:10.239 00:19:10.239 --- 10.0.0.2 ping statistics --- 00:19:10.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.239 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:10.239 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:10.239 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:19:10.239 00:19:10.239 --- 10.0.0.3 ping statistics --- 00:19:10.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.239 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:10.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:10.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:10.239 00:19:10.239 --- 10.0.0.1 ping statistics --- 00:19:10.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.239 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91032 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91032 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91032 ']' 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.239 15:42:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.239 15:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5e0973d9545093c450a36d9ba607cb9b 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wtb 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5e0973d9545093c450a36d9ba607cb9b 0 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5e0973d9545093c450a36d9ba607cb9b 0 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5e0973d9545093c450a36d9ba607cb9b 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wtb 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wtb 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.wtb 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0a76596aec340ec363521f9874db3be8db2f6053bfd6c9a48cd28df61c415acc 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uhS 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0a76596aec340ec363521f9874db3be8db2f6053bfd6c9a48cd28df61c415acc 3 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0a76596aec340ec363521f9874db3be8db2f6053bfd6c9a48cd28df61c415acc 3 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0a76596aec340ec363521f9874db3be8db2f6053bfd6c9a48cd28df61c415acc 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uhS 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uhS 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.uhS 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f48fdfae68f9d3f60099e00f2b1c2a8d69ae62b8429e60b0 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.efi 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f48fdfae68f9d3f60099e00f2b1c2a8d69ae62b8429e60b0 0 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f48fdfae68f9d3f60099e00f2b1c2a8d69ae62b8429e60b0 0 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f48fdfae68f9d3f60099e00f2b1c2a8d69ae62b8429e60b0 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.efi 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.efi 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.efi 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8b3adc6fb041b08cb9e4b4a6ddd15b0281a0b6a62e0b5958 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.X9j 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8b3adc6fb041b08cb9e4b4a6ddd15b0281a0b6a62e0b5958 2 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8b3adc6fb041b08cb9e4b4a6ddd15b0281a0b6a62e0b5958 2 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8b3adc6fb041b08cb9e4b4a6ddd15b0281a0b6a62e0b5958 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.X9j 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.X9j 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.X9j 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=54e05d430693aa6925ff62422b8f28ca 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.iZL 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 54e05d430693aa6925ff62422b8f28ca 
1 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 54e05d430693aa6925ff62422b8f28ca 1 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=54e05d430693aa6925ff62422b8f28ca 00:19:11.617 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.iZL 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.iZL 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.iZL 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=481f3fe3f821a34d7f35ca86229f3c9c 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dv9 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 481f3fe3f821a34d7f35ca86229f3c9c 1 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 481f3fe3f821a34d7f35ca86229f3c9c 1 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=481f3fe3f821a34d7f35ca86229f3c9c 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dv9 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dv9 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.dv9 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:11.876 15:42:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1e895cf670c56dd4e95a221f28ae5be0017b6e580191bc83 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.mW3 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1e895cf670c56dd4e95a221f28ae5be0017b6e580191bc83 2 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1e895cf670c56dd4e95a221f28ae5be0017b6e580191bc83 2 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1e895cf670c56dd4e95a221f28ae5be0017b6e580191bc83 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.mW3 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.mW3 00:19:11.876 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.mW3 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4e6dcbc85340096543ea2dfaa5954de4 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nMo 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4e6dcbc85340096543ea2dfaa5954de4 0 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4e6dcbc85340096543ea2dfaa5954de4 0 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4e6dcbc85340096543ea2dfaa5954de4 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nMo 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nMo 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.nMo 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d318311504f3c9d91eefef7b5979df68eb8518222f450d66be12cb8ffdfd1c93 00:19:11.877 15:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:11.877 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dzn 00:19:11.877 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d318311504f3c9d91eefef7b5979df68eb8518222f450d66be12cb8ffdfd1c93 3 00:19:11.877 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d318311504f3c9d91eefef7b5979df68eb8518222f450d66be12cb8ffdfd1c93 3 00:19:11.877 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.877 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.877 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d318311504f3c9d91eefef7b5979df68eb8518222f450d66be12cb8ffdfd1c93 00:19:11.877 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:11.877 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:12.135 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dzn 00:19:12.135 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dzn 00:19:12.135 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.dzn 00:19:12.135 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:12.135 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91032 00:19:12.135 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91032 ']' 00:19:12.135 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.135 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.135 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
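The gen_dhchap_key/format_key trace above only shows the xxd read from /dev/urandom and an opaque `python -` step. Below is a rough, self-contained sketch of what that pipeline appears to produce, assuming the secret is wrapped as base64(secret + CRC-32) behind a "DHHC-1:<hash id>:" prefix, which matches the DHHC-1:00:/01:/02:/03: strings printed further down in this log; gen_dhchap_secret is a hypothetical stand-in, not the SPDK helper itself.

gen_dhchap_secret() {
    # digest selects the hash id embedded in the key string, len is the hex-string length
    local digest=$1 len=$2
    local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local hex
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes as a hex string
    # Assumed layout: "DHHC-1:<2-digit hash id>:<base64 of secret plus CRC-32 trailer>:"
    python3 - "$hex" "${ids[$digest]}" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")  # trailer assumed to be a little-endian CRC-32
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PY
}

# Usage mirroring the trace: a sha384 key of 48 hex characters, stored 0600 in a temp file.
file=$(mktemp -t spdk.key-sha384.XXX)
gen_dhchap_secret sha384 48 > "$file"
chmod 0600 "$file"
echo "$file"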
00:19:12.135 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.135 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wtb 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.uhS ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uhS 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.efi 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.X9j ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.X9j 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.iZL 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.dv9 ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dv9 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.mW3 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.nMo ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.nMo 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.dzn 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
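configure_kernel_target, traced below, stands up a kernel nvmet target over configfs, after which auth.sh allow-lists a single host NQN and attaches the DH-HMAC-CHAP secrets to it. A condensed sketch of those steps follows, with one caveat: the xtrace output only shows the echoed values, not the redirection targets, so the configfs attribute names here (device_path, enable, addr_*, attr_allow_any_host, dhchap_*) are the standard kernel nvmet ones rather than paths copied from this log, and the key strings are placeholders.

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"

echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # backing device picked by the block scan below
echo 1 > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"                      # listener values echoed in the trace below
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port

mkdir "$host"                                            # per-host auth instead of allow_any_host
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$host" "$subsys/allowed_hosts/"

echo 'hmac(sha256)' > "$host/dhchap_hash"                # values written by nvmet_auth_set_key below
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo "DHHC-1:00:<host secret>:" > "$host/dhchap_key"     # placeholders; the real keys appear later in the log
echo "DHHC-1:02:<ctrl secret>:" > "$host/dhchap_ctrl_key"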
00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:12.394 15:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:12.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:12.913 Waiting for block devices as requested 00:19:12.913 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:12.913 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:13.481 No valid GPT data, bailing 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:13.481 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:13.740 No valid GPT data, bailing 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:13.741 No valid GPT data, bailing 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:13.741 No valid GPT data, bailing 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:19:13.741 15:42:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:13.741 15:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -a 10.0.0.1 -t tcp -s 4420 00:19:14.000 00:19:14.000 Discovery Log Number of Records 2, Generation counter 2 00:19:14.000 =====Discovery Log Entry 0====== 00:19:14.000 trtype: tcp 00:19:14.000 adrfam: ipv4 00:19:14.000 subtype: current discovery subsystem 00:19:14.000 treq: not specified, sq flow control disable supported 00:19:14.000 portid: 1 00:19:14.000 trsvcid: 4420 00:19:14.000 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:14.000 traddr: 10.0.0.1 00:19:14.000 eflags: none 00:19:14.000 sectype: none 00:19:14.000 =====Discovery Log Entry 1====== 00:19:14.000 trtype: tcp 00:19:14.000 adrfam: ipv4 00:19:14.000 subtype: nvme subsystem 00:19:14.000 treq: not specified, sq flow control disable supported 00:19:14.000 portid: 1 00:19:14.000 trsvcid: 4420 00:19:14.000 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:14.000 traddr: 10.0.0.1 00:19:14.000 eflags: none 00:19:14.000 sectype: none 00:19:14.000 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:14.000 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:14.000 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:14.000 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:14.000 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.000 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.000 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:14.000 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:14.000 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.001 15:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.001 nvme0n1 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.001 15:42:09 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.260 nvme0n1 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.260 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.261 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.520 nvme0n1 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.520 15:42:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.520 nvme0n1 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.520 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:14.779 15:42:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.779 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.780 nvme0n1 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:14.780 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.039 nvme0n1 00:19:15.039 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.039 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.039 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.039 15:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.039 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.039 15:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.039 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.298 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.558 nvme0n1 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.558 nvme0n1 00:19:15.558 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.817 15:42:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.817 nvme0n1 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.817 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:16.076 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.077 15:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.077 nvme0n1 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
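The run of nvmf/common.sh steps just above (ip_candidates, the -z checks, ip=NVMF_INITIATOR_IP) is the get_main_ns_ip helper picking the address to dial before each attach: it maps the transport to the name of an environment variable and then dereferences it. A minimal sketch of that selection logic, paraphrased from the traced steps only (the real helper lives in nvmf/common.sh; the _sketch suffix, the transport argument, the combined -z check and the return codes are my simplifications):

get_main_ns_ip_sketch() {
    local transport=$1 ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA jobs dial the first target IP
        ["tcp"]=NVMF_INITIATOR_IP       # TCP jobs, like this one, dial the initiator IP
    )
    # bail out if the transport is unknown or has no candidate variable
    [[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1
    ip=${ip_candidates[$transport]}     # here: NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1         # indirect expansion; resolves to 10.0.0.1 in this run
    echo "${!ip}"
}

get_main_ns_ip_sketch tcp               # prints 10.0.0.1 with this job's environment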
00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.077 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.335 nvme0n1 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:16.335 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:16.336 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:16.336 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.336 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:16.902 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:16.902 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:16.902 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:16.902 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:16.902 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
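Every keyid/dhgroup iteration traced here reduces to the same four host-side RPCs: restrict the accepted digests and DH groups, attach with the key under test (plus the controller key when one exists), check that the controller actually came up, and detach again. A hedged sketch of that sequence written as standalone rpc.py calls (the log goes through the test suite's rpc_cmd wrapper instead, and key0/ckey0 are key names the test registered earlier):

rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
rpc.py bdev_nvme_detach_controller nvme0

The --dhchap-ctrlr-key argument is only added when a controller key exists for that keyid; that is what the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at host/auth.sh@58 handles, and keyid 4 in the rounds above runs without it.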
00:19:16.902 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.902 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.903 15:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.161 nvme0n1 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.161 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.421 nvme0n1 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.421 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.680 nvme0n1 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.680 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.681 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.939 nvme0n1 00:19:17.939 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.939 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:17.939 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.939 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.939 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.939 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.939 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:17.940 15:42:12 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.940 15:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.199 nvme0n1 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:18.199 15:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.099 15:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.099 nvme0n1 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.099 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.358 nvme0n1 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.358 
15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.358 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.925 nvme0n1 00:19:20.925 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.925 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.925 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.925 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.925 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.925 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.925 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.925 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.925 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.925 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.925 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.926 15:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.185 nvme0n1 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.185 15:42:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:21.185 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:21.211 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.211 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.211 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:21.211 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.211 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:21.211 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:21.211 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:21.211 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:21.211 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.211 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.470 nvme0n1 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:21.470 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:21.471 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:21.471 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:21.471 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:21.471 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.471 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:21.471 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:21.471 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:21.471 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.471 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.471 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.471 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.730 15:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.989 nvme0n1 00:19:21.989 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.989 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.989 15:42:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.989 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.989 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.989 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.248 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 nvme0n1 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.816 15:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.384 nvme0n1 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.384 
15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
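[Annotation] The repeated nvmf/common.sh@741-755 entries in this trace come from the get_main_ns_ip helper: it maps the transport under test to the environment variable that holds the connect address (NVMF_INITIATOR_IP for tcp, NVMF_FIRST_TARGET_IP for rdma) and echoes its value, which is 10.0.0.1 throughout this job. A rough sketch reconstructed from the xtrace only; the exact guard conditions and error handling in nvmf/common.sh may differ:

get_main_ns_ip() {
    # Pick the address variable for the transport under test, then dereference it.
    # TEST_TRANSPORT is assumed to be "tcp" in this run.
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Bail out if the transport is unset or unknown (the two @747 [[ -z ... ]]
    # checks visible in the trace).
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}   # @748
    [[ -z ${!ip} ]] && return 1            # @750: the named variable must be set

    echo "${!ip}"                          # @755: 10.0.0.1 in this job
}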
00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.384 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.951 nvme0n1 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:23.952 
15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.952 15:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.520 nvme0n1 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:24.520 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.521 nvme0n1 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.521 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:24.780 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
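[Annotation] host/auth.sh@58 above builds the optional controller-key arguments with a ${var:+...} expansion, so --dhchap-ctrlr-key is only passed when a ckey exists for the current keyid (keyid 4 has an empty ckey in this run and is attached with --dhchap-key alone). A minimal standalone illustration using placeholder key material, not the keys from this job:

# Placeholder ckeys array: indices 0-3 have controller keys, index 4 does not.
ckeys=("DHHC-1:03:placeholder0" "DHHC-1:02:placeholder1" "DHHC-1:01:placeholder2" "DHHC-1:00:placeholder3" "")

for keyid in "${!ckeys[@]}"; do
    # Same expansion as host/auth.sh@58: yields the two extra arguments when
    # ckeys[keyid] is non-empty, and nothing at all otherwise.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no controller key arguments>}"
done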
00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.781 nvme0n1 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.781 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.041 nvme0n1 00:19:25.041 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.041 15:42:19 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.041 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.041 15:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.041 15:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.041 nvme0n1 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.041 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.301 nvme0n1 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
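[Annotation] The @100/@101/@102 markers recurring in this trace are the three nested loops that drive the whole section: each digest is paired with each DH group and each key index, the target side is re-keyed, and a full connect/verify/detach cycle is run. Roughly, simplified from the trace (the real host/auth.sh carries additional setup and cleanup not shown here):

# Simplified shape of the host/auth.sh@100-104 loop seen in this trace.
# digests, dhgroups and keys are assumed to be arrays prepared earlier in the
# script (sha256, sha384, ... and ffdhe2048 ... ffdhe8192 in this run).
for digest in "${digests[@]}"; do          # @100
    for dhgroup in "${dhgroups[@]}"; do    # @101
        for keyid in "${!keys[@]}"; do     # @102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # @103: target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"    # @104: host side
        done
    done
done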
00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.301 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.561 nvme0n1 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
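[editorial sketch] On the initiator side, each connect_authenticate call in this trace reduces to four RPCs against the SPDK application: restrict the allowed digest/DH group, attach with the DH-HMAC-CHAP key(s), verify the controller came up, then detach. The sketch below mirrors the commands visible in the log (rpc_cmd, 10.0.0.1:4420, the host/subsystem NQNs, key${keyid}/ckey${keyid}); treat it as a reading aid rather than the test's exact implementation.

    # Sketch of one iteration, e.g. connect_authenticate sha384 ffdhe3072 0.
    # rpc_cmd wraps scripts/rpc.py against the running target application.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Only advertise the digest/dhgroup under test on the host side.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach, authenticating with keyN (and ckeyN when bidirectional auth is tested).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

        # Authentication succeeded if the controller is listed; then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
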
00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.561 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.821 nvme0n1 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.821 nvme0n1 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.821 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.081 15:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.081 nvme0n1 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.081 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.341 nvme0n1 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.341 15:42:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.341 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.600 nvme0n1 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:26.600 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.601 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.601 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:26.601 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.601 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:26.601 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:26.601 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:26.601 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.601 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.601 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.859 nvme0n1 00:19:26.859 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.860 15:42:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.860 15:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.135 nvme0n1 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:27.135 15:42:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.135 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.405 nvme0n1 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.405 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:27.406 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.664 nvme0n1 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.665 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.924 nvme0n1 00:19:27.924 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.924 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.924 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.924 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.924 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.924 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.924 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.924 15:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.924 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.924 15:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.924 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.183 nvme0n1 00:19:28.183 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.183 15:42:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.183 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.183 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.183 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.441 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.699 nvme0n1 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:28.699 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.700 15:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.958 nvme0n1 00:19:28.958 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.958 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.958 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.958 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.958 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.958 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
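For orientation, the host-side sequence that each connect_authenticate round in this trace performs can be summarised as a short shell sketch. This is a reconstruction from the xtrace output alone, not the literal host/auth.sh source: rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py, and the keys/ckeys arrays, the key<N>/ckey<N> key names and the 10.0.0.1:4420 listener are all set up earlier in the run, outside this excerpt.

# Sketch of one connect_authenticate <digest> <dhgroup> <keyid> round, reconstructed
# from the xtrace above; rpc_cmd and the ckeys array come from the test harness.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # add --dhchap-ctrlr-key only when a controller key exists for this keyid
    # (in this run ckeys[4] is empty, so keyid 4 is attached without one)
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # restrict the host to the digest/DH group under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # connect to the kernel nvmet target at 10.0.0.1:4420 and authenticate with key<keyid>
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # a successful DH-HMAC-CHAP handshake leaves exactly one controller, nvme0
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    # tear the connection down so the next digest/dhgroup/keyid combination starts clean
    rpc_cmd bdev_nvme_detach_controller nvme0
}

Note that --dhchap-key and --dhchap-ctrlr-key are passed key names (key<N>, ckey<N>) registered earlier in the run, not the DHHC-1 secrets themselves; the secrets only appear in the target-side nvmet_auth_set_key echo lines of the trace.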
00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.216 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.475 nvme0n1 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
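The repetition in the trace comes from the driver loops at host/auth.sh@100-104: every (digest, dhgroup, keyid) combination first re-keys the kernel nvmet target via nvmet_auth_set_key and then runs connect_authenticate from the SPDK host side. The sketch below is again reconstructed from the xtrace only: the keys/ckeys arrays are defined outside this excerpt, the digest and DH group lists show only the values visible here, and the redirection targets of the echo calls inside nvmet_auth_set_key (kernel nvmet configfs attributes) are not captured by xtrace, so they are only described in a comment.

# Driver loops as they appear in the xtrace (host/auth.sh@100-104). The array
# contents below list only the values visible in this excerpt, not the full lists.
digests=(sha384 sha512)
dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # target side: select hmac(<digest>) and <dhgroup>, then install
            # keys[keyid] (plus ckeys[keyid] when non-empty) for the host;
            # these are the echo 'hmac(...)', echo ffdheNNNN and echo DHHC-1:...
            # lines in the trace, whose redirection targets are not shown by xtrace
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # host side: set options, attach, verify, detach (see the sketch above)
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done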
00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.475 15:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.041 nvme0n1 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.041 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.607 nvme0n1 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.607 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.608 15:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.173 nvme0n1 00:19:31.173 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.173 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.173 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.173 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.173 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.173 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.173 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.173 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.173 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.173 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.173 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.174 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.742 nvme0n1 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:31.742 15:42:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.742 15:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 nvme0n1 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.309 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.310 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:32.310 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.310 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:32.310 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:32.310 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:32.310 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.310 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.310 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.568 nvme0n1 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.568 15:42:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:32.568 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.569 nvme0n1 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.569 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.828 nvme0n1 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.828 15:42:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:32.828 15:42:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.828 15:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.087 nvme0n1 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.087 nvme0n1 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.087 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.345 nvme0n1 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.345 
15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.345 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.604 15:42:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.604 nvme0n1 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
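For readers following the trace, the host-side half of the iteration in progress here (digest sha512, DH group ffdhe3072, key id 2) reduces to four SPDK JSON-RPC calls, all of which appear in the surrounding trace lines. The sketch below replays them in isolation with the same rpc_cmd wrapper the autotest framework uses; the key names key2/ckey2 are assumed to have been registered with the bdev_nvme layer earlier in host/auth.sh, and the address and NQNs are copied verbatim from the trace.

    # One connect_authenticate() pass, host side only (sketch; key2/ckey2 are
    # key names assumed to be registered with bdev_nvme earlier in auth.sh).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Authentication succeeded only if the controller actually appeared ...
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # ... then drop it before the next digest/dhgroup/keyid combination.
    rpc_cmd bdev_nvme_detach_controller nvme0

Restricting --dhchap-digests and --dhchap-dhgroups to a single value each appears to be what pins the DH-HMAC-CHAP negotiation to the exact combination under test, rather than letting host and target settle on their strongest common option.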
00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.604 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.863 nvme0n1 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.863 15:42:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.863 15:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.122 nvme0n1 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.122 
15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.122 nvme0n1 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.122 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.381 nvme0n1 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.381 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.639 15:42:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.639 nvme0n1 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.639 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.640 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.640 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.640 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.640 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.640 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.640 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
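Stepping back from the individual RPCs, the repetition in this part of the log comes from two nested loops in host/auth.sh: the outer loop walks the DH groups and the inner loop walks the key indices, re-running the same set-key/connect/verify/detach cycle for every combination. A condensed reconstruction of that driver loop, based only on the auth.sh@101-@104 trace lines above (helper bodies and the keys/ckeys arrays elided; only the sha512 digest and the four DH groups visible in this excerpt are listed), looks roughly like this:

    # Reconstructed driver loop (sketch). keys[]/ckeys[] hold the DHHC-1 secrets
    # seen in the trace; the two helpers are the ones whose bodies are traced above.
    digest=sha512
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)    # groups seen in this excerpt
    for dhgroup in "${dhgroups[@]}"; do                   # auth.sh@101
        for keyid in "${!keys[@]}"; do                    # auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side config
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side connect/verify
        done
    done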
00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.898 nvme0n1 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.898 15:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:34.898 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:35.155 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.156 nvme0n1 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.156 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:35.412 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.413 nvme0n1 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.413 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.669 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.669 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.669 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:35.669 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:35.669 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:35.669 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.670 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.670 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
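The target-side helper, nvmet_auth_set_key, is only partly visible here because xtrace does not record where its echo output is redirected: the trace shows it emitting 'hmac(sha512)', the DH group name, the DHHC-1 host key, and, for key ids that have one, the controller key. On a Linux-kernel nvmet target those values would normally be written into the host's configfs entry; the sketch below is a guess at that plumbing and assumes a $nvmet_host path plus dhchap_hash/dhchap_dhgroup/dhchap_key/dhchap_ctrl_key attribute names, none of which is confirmed by this log.

    # Hypothetical reconstruction -- $nvmet_host and the attribute names are assumptions.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}

        echo "hmac(${digest})" > "${nvmet_host}/dhchap_hash"     # e.g. hmac(sha512)
        echo "${dhgroup}"      > "${nvmet_host}/dhchap_dhgroup"  # e.g. ffdhe6144
        echo "${key}"          > "${nvmet_host}/dhchap_key"      # DHHC-1:... host secret
        # key id 4 carries no controller key in this run, so that write is conditional
        [[ -z ${ckey} ]] || echo "${ckey}" > "${nvmet_host}/dhchap_ctrl_key"
    }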
00:19:35.670 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.670 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:35.670 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:35.670 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:35.670 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.670 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.670 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.928 nvme0n1 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.928 15:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.186 nvme0n1 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.187 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.445 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.707 nvme0n1 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.707 15:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.965 nvme0n1 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.965 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.223 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.481 nvme0n1 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.481 15:42:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWUwOTczZDk1NDUwOTNjNDUwYTM2ZDliYTYwN2NiOWKgEf3m: 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: ]] 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGE3NjU5NmFlYzM0MGVjMzYzNTIxZjk4NzRkYjNiZThkYjJmNjA1M2JmZDZjOWE0OGNkMjhkZjYxYzQxNWFjY+MeYIM=: 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.481 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.047 nvme0n1 00:19:38.047 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.047 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.047 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.047 15:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.047 15:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.047 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.613 nvme0n1 00:19:38.613 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.613 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.613 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.613 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.613 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.614 15:42:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRlMDVkNDMwNjkzYWE2OTI1ZmY2MjQyMmI4ZjI4Y2EJ8riD: 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: ]] 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDgxZjNmZTNmODIxYTM0ZDdmMzVjYTg2MjI5ZjNjOWNA3kfu: 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.614 15:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.180 nvme0n1 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWU4OTVjZjY3MGM1NmRkNGU5NWEyMjFmMjhhZTViZTAwMTdiNmU1ODAxOTFiYzgztZJxWg==: 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: ]] 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGU2ZGNiYzg1MzQwMDk2NTQzZWEyZGZhYTU5NTRkZTTzFIpv: 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:39.180 15:42:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.180 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.745 nvme0n1 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDMxODMxMTUwNGYzYzlkOTFlZWZlZjdiNTk3OWRmNjhlYjg1MTgyMjJmNDUwZDY2YmUxMmNiOGZmZGZkMWM5M6v5Vt4=: 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:39.745 15:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.313 nvme0n1 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjQ4ZmRmYWU2OGY5ZDNmNjAwOTllMDBmMmIxYzJhOGQ2OWFlNjJiODQyOWU2MGIwmAPV4g==: 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: ]] 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGIzYWRjNmZiMDQxYjA4Y2I5ZTRiNGE2ZGRkMTViMDI4MWEwYjZhNjJlMGI1OTU49OsnBA==: 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.313 
15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:40.313 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.572 2024/07/15 15:42:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:40.572 request: 00:19:40.572 { 00:19:40.572 "method": "bdev_nvme_attach_controller", 00:19:40.572 "params": { 00:19:40.572 "name": "nvme0", 00:19:40.572 "trtype": "tcp", 00:19:40.572 "traddr": "10.0.0.1", 00:19:40.572 "adrfam": "ipv4", 00:19:40.572 "trsvcid": "4420", 00:19:40.572 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:40.572 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:40.572 "prchk_reftag": false, 00:19:40.572 "prchk_guard": false, 00:19:40.572 "hdgst": false, 00:19:40.572 "ddgst": false 00:19:40.572 } 00:19:40.572 } 00:19:40.572 Got JSON-RPC error response 00:19:40.572 GoRPCClient: error on JSON-RPC call 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.572 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.573 2024/07/15 15:42:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:40.573 request: 00:19:40.573 { 00:19:40.573 "method": "bdev_nvme_attach_controller", 00:19:40.573 "params": { 00:19:40.573 "name": 
"nvme0", 00:19:40.573 "trtype": "tcp", 00:19:40.573 "traddr": "10.0.0.1", 00:19:40.573 "adrfam": "ipv4", 00:19:40.573 "trsvcid": "4420", 00:19:40.573 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:40.573 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:40.573 "prchk_reftag": false, 00:19:40.573 "prchk_guard": false, 00:19:40.573 "hdgst": false, 00:19:40.573 "ddgst": false, 00:19:40.573 "dhchap_key": "key2" 00:19:40.573 } 00:19:40.573 } 00:19:40.573 Got JSON-RPC error response 00:19:40.573 GoRPCClient: error on JSON-RPC call 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.573 2024/07/15 15:42:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:40.573 request: 00:19:40.573 { 00:19:40.573 "method": "bdev_nvme_attach_controller", 00:19:40.573 "params": { 00:19:40.573 "name": "nvme0", 00:19:40.573 "trtype": "tcp", 00:19:40.573 "traddr": "10.0.0.1", 00:19:40.573 "adrfam": "ipv4", 00:19:40.573 "trsvcid": "4420", 00:19:40.573 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:40.573 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:40.573 "prchk_reftag": false, 00:19:40.573 "prchk_guard": false, 00:19:40.573 "hdgst": false, 00:19:40.573 "ddgst": false, 00:19:40.573 "dhchap_key": "key1", 00:19:40.573 "dhchap_ctrlr_key": "ckey2" 00:19:40.573 } 00:19:40.573 } 00:19:40.573 Got JSON-RPC error response 00:19:40.573 GoRPCClient: error on JSON-RPC call 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:40.573 rmmod nvme_tcp 00:19:40.573 rmmod nvme_fabrics 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91032 ']' 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91032 00:19:40.573 15:42:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91032 ']' 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91032 00:19:40.573 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91032 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:40.833 killing process with pid 91032 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91032' 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91032 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91032 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:40.833 15:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:41.770 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:41.770 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:41.770 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:41.770 15:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.wtb /tmp/spdk.key-null.efi /tmp/spdk.key-sha256.iZL /tmp/spdk.key-sha384.mW3 /tmp/spdk.key-sha512.dzn /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:41.770 15:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:42.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:42.291 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:42.291 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:42.291 00:19:42.291 real 0m32.353s 00:19:42.291 user 0m30.101s 00:19:42.291 sys 0m3.599s 00:19:42.291 15:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:42.291 15:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.291 ************************************ 00:19:42.291 END TEST nvmf_auth_host 00:19:42.291 ************************************ 00:19:42.291 15:42:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:42.291 15:42:37 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:19:42.291 15:42:37 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:42.291 15:42:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:42.291 15:42:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.291 15:42:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:42.291 ************************************ 00:19:42.291 START TEST nvmf_digest 00:19:42.291 ************************************ 00:19:42.291 15:42:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:42.291 * Looking for test storage... 
00:19:42.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:42.291 15:42:37 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:42.291 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:42.291 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.291 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.291 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
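The NVME_HOSTNQN/NVME_HOSTID pair generated above with nvme gen-hostnqn, together with NVME_CONNECT and NVME_SUBNQN, feeds the kernel-initiator path of these tests; this digest run drives the SPDK initiator through bdevperf instead, so the variables are only defined here. Purely as a hypothetical illustration (not executed anywhere in this log), they would be consumed roughly like this:

    # Hypothetical use of the variables defined by nvmf/common.sh above; the
    # subsystem NQN is the NVME_SUBNQN default, while this run actually exposes
    # nqn.2016-06.io.spdk:cnode1 to bdevperf instead.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n "$NVME_SUBNQN" \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"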
00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:42.292 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:42.551 Cannot find device "nvmf_tgt_br" 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.551 Cannot find device "nvmf_tgt_br2" 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:42.551 Cannot find device "nvmf_tgt_br" 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:42.551 Cannot find device "nvmf_tgt_br2" 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:42.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:42.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:42.551 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:42.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:19:42.811 00:19:42.811 --- 10.0.0.2 ping statistics --- 00:19:42.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.811 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:42.811 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:42.811 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:19:42.811 00:19:42.811 --- 10.0.0.3 ping statistics --- 00:19:42.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.811 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:42.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:42.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:42.811 00:19:42.811 --- 10.0.0.1 ping statistics --- 00:19:42.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.811 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:42.811 ************************************ 00:19:42.811 START TEST nvmf_digest_clean 00:19:42.811 ************************************ 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=92596 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 92596 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 92596 ']' 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.811 15:42:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.811 15:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:42.811 [2024-07-15 15:42:37.831750] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:42.811 [2024-07-15 15:42:37.831838] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.069 [2024-07-15 15:42:37.973890] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.069 [2024-07-15 15:42:38.043187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.069 [2024-07-15 15:42:38.043246] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.069 [2024-07-15 15:42:38.043261] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.069 [2024-07-15 15:42:38.043271] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.069 [2024-07-15 15:42:38.043280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
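Condensed, the NET_TYPE=virt bring-up traced above (nvmf_veth_init followed by nvmfappstart --wait-for-rpc) is the sequence sketched below. Interface names, addresses and the nvmf_tgt flags are copied from the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3), the FORWARD rule and all error handling are omitted, and the polling loop is a stand-in for waitforlisten.

    set -e
    # veth/namespace topology: initiator side in the default namespace,
    # target side inside nvmf_tgt_ns_spdk, both legs bridged via nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as checked above

    # Start the target inside the namespace with the flags traced above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done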
00:19:43.069 [2024-07-15 15:42:38.043316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:44.005 null0 00:19:44.005 [2024-07-15 15:42:38.941658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.005 [2024-07-15 15:42:38.965738] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92646 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92646 /var/tmp/bperf.sock 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 92646 ']' 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
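Each run_bperf case above launches a second SPDK application, bdevperf, on its own RPC socket and only then wires it up to the target. The sketch below is the 4096-byte, qd=128 randread case with exactly the flags traced above; the polling loop is a stand-in for waitforlisten.

    # -m 2: core mask 0x2 (hence "Reactor started on core 1" above);
    # -r: private RPC socket; -w/-o/-q/-t: workload, IO size, queue depth, runtime;
    # -z plus --wait-for-rpc keep bdevperf idle until it is driven over RPC.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!

    # Stand-in for waitforlisten: poll until the socket answers RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done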
00:19:44.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.005 15:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:44.005 [2024-07-15 15:42:39.030056] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:44.005 [2024-07-15 15:42:39.030145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92646 ] 00:19:44.265 [2024-07-15 15:42:39.167647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.265 [2024-07-15 15:42:39.236078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.833 15:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.833 15:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:44.833 15:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:44.833 15:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:44.833 15:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:45.400 15:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:45.400 15:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:45.658 nvme0n1 00:19:45.658 15:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:45.658 15:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:45.658 Running I/O for 2 seconds... 
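Once bdevperf answers on /var/tmp/bperf.sock, the whole run is driven over that socket, as traced above: finish framework init, attach the target with the TCP data-digest flag, then ask bdevperf to run the workload. Condensed below; socket path, address, NQN and --ddgst are the ones from this run.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # bdevperf was started with --wait-for-rpc, so init must be completed first.
    "$rpc" -s "$sock" framework_start_init

    # Attach the NVMe-oF/TCP target with data digest (--ddgst) enabled;
    # this creates the nvme0n1 bdev the workload runs against.
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Start the timed run and wait for the latency summary printed above.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests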
00:19:47.561 00:19:47.561 Latency(us) 00:19:47.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.561 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:47.561 nvme0n1 : 2.00 22822.79 89.15 0.00 0.00 5602.54 3083.17 14834.97 00:19:47.561 =================================================================================================================== 00:19:47.561 Total : 22822.79 89.15 0.00 0.00 5602.54 3083.17 14834.97 00:19:47.561 0 00:19:47.820 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:47.820 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:47.820 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:47.820 | select(.opcode=="crc32c") 00:19:47.820 | "\(.module_name) \(.executed)"' 00:19:47.820 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:47.820 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92646 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 92646 ']' 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 92646 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92646 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:48.079 killing process with pid 92646 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92646' 00:19:48.079 Received shutdown signal, test time was about 2.000000 seconds 00:19:48.079 00:19:48.079 Latency(us) 00:19:48.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.079 =================================================================================================================== 00:19:48.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 92646 00:19:48.079 15:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 92646 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92732 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92732 /var/tmp/bperf.sock 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 92732 ']' 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.079 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:48.079 [2024-07-15 15:42:43.188343] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:48.079 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:48.079 Zero copy mechanism will not be used. 
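The pass/fail criterion applied after each of these runs is the crc32c accounting traced a little above: accel_get_stats is read from the bdevperf socket, reduced with the jq filter shown, and the test requires a non-zero executed count from the expected module (software here, since scan_dsa=false). A condensed sketch follows; the values in the comment are illustrative, not taken from this log.

    # "<module> <executed>" for the crc32c opcode, as get_accel_stats does above.
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

    # e.g. acc_module=software, acc_executed=12345 (illustrative)
    (( acc_executed > 0 ))            # digests were actually computed...
    [[ "$acc_module" == software ]]   # ...and by the expected module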
00:19:48.079 [2024-07-15 15:42:43.189119] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92732 ] 00:19:48.337 [2024-07-15 15:42:43.325716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.337 [2024-07-15 15:42:43.375445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.337 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.337 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:48.337 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:48.337 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:48.337 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:48.595 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:48.595 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:48.853 nvme0n1 00:19:48.853 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:48.853 15:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:49.112 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:49.112 Zero copy mechanism will not be used. 00:19:49.112 Running I/O for 2 seconds... 
00:19:51.017 00:19:51.017 Latency(us) 00:19:51.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.017 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:51.017 nvme0n1 : 2.00 9737.05 1217.13 0.00 0.00 1640.01 498.97 6345.08 00:19:51.017 =================================================================================================================== 00:19:51.017 Total : 9737.05 1217.13 0.00 0.00 1640.01 498.97 6345.08 00:19:51.017 0 00:19:51.017 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:51.017 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:51.017 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:51.017 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:51.017 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:51.017 | select(.opcode=="crc32c") 00:19:51.017 | "\(.module_name) \(.executed)"' 00:19:51.276 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92732 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 92732 ']' 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 92732 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92732 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:51.277 killing process with pid 92732 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92732' 00:19:51.277 Received shutdown signal, test time was about 2.000000 seconds 00:19:51.277 00:19:51.277 Latency(us) 00:19:51.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.277 =================================================================================================================== 00:19:51.277 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 92732 00:19:51.277 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 92732 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92803 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92803 /var/tmp/bperf.sock 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 92803 ']' 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.536 15:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:51.536 [2024-07-15 15:42:46.541136] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:19:51.536 [2024-07-15 15:42:46.541239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92803 ] 00:19:51.794 [2024-07-15 15:42:46.681402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.794 [2024-07-15 15:42:46.740743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.362 15:42:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.362 15:42:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:52.362 15:42:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:52.362 15:42:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:52.362 15:42:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:52.621 15:42:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:52.621 15:42:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:52.880 nvme0n1 00:19:52.880 15:42:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:52.880 15:42:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:53.139 Running I/O for 2 seconds... 
00:19:55.042 00:19:55.042 Latency(us) 00:19:55.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.042 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:55.043 nvme0n1 : 2.01 27167.57 106.12 0.00 0.00 4706.91 2115.03 8460.10 00:19:55.043 =================================================================================================================== 00:19:55.043 Total : 27167.57 106.12 0.00 0.00 4706.91 2115.03 8460.10 00:19:55.043 0 00:19:55.043 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:55.043 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:55.043 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:55.043 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:55.043 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:55.043 | select(.opcode=="crc32c") 00:19:55.043 | "\(.module_name) \(.executed)"' 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92803 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 92803 ']' 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 92803 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92803 00:19:55.302 killing process with pid 92803 00:19:55.302 Received shutdown signal, test time was about 2.000000 seconds 00:19:55.302 00:19:55.302 Latency(us) 00:19:55.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.302 =================================================================================================================== 00:19:55.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92803' 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 92803 00:19:55.302 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 92803 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92888 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92888 /var/tmp/bperf.sock 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 92888 ']' 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:55.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.561 15:42:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:55.561 [2024-07-15 15:42:50.555949] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:55.561 [2024-07-15 15:42:50.556225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:19:55.561 Zero copy mechanism will not be used. 
00:19:55.561 llocations --file-prefix=spdk_pid92888 ] 00:19:55.820 [2024-07-15 15:42:50.692987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.820 [2024-07-15 15:42:50.749007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.419 15:42:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.419 15:42:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:56.419 15:42:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:56.419 15:42:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:56.419 15:42:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:56.687 15:42:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:56.687 15:42:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:56.946 nvme0n1 00:19:56.946 15:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:56.946 15:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:57.205 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:57.205 Zero copy mechanism will not be used. 00:19:57.205 Running I/O for 2 seconds... 
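Each bperf case above is torn down with the same killprocess helper; condensed from the traces (pids 92646, 92732, 92803) and keeping only the branch these runs actually take (Linux, a reactor_* process rather than a sudo wrapper), it reduces to the sketch below.

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1               # the '[' -z ... ']' guard above
        kill -0 "$pid"                            # process must still be alive
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in these runs; the
                                                  # sudo special case is omitted here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap it before the next case starts
    }

    killprocess "$bperfpid"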
00:19:59.124 00:19:59.124 Latency(us) 00:19:59.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.124 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:59.124 nvme0n1 : 2.00 7524.79 940.60 0.00 0.00 2121.58 1630.95 9532.51 00:19:59.124 =================================================================================================================== 00:19:59.124 Total : 7524.79 940.60 0.00 0.00 2121.58 1630.95 9532.51 00:19:59.124 0 00:19:59.124 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:59.124 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:59.124 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:59.124 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:59.124 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:59.124 | select(.opcode=="crc32c") 00:19:59.124 | "\(.module_name) \(.executed)"' 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92888 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 92888 ']' 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 92888 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92888 00:19:59.404 killing process with pid 92888 00:19:59.404 Received shutdown signal, test time was about 2.000000 seconds 00:19:59.404 00:19:59.404 Latency(us) 00:19:59.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.404 =================================================================================================================== 00:19:59.404 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92888' 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 92888 00:19:59.404 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 92888 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 92596 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 92596 ']' 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 92596 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92596 00:19:59.662 killing process with pid 92596 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92596' 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 92596 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 92596 00:19:59.662 ************************************ 00:19:59.662 END TEST nvmf_digest_clean 00:19:59.662 ************************************ 00:19:59.662 00:19:59.662 real 0m16.993s 00:19:59.662 user 0m32.240s 00:19:59.662 sys 0m4.174s 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:59.662 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:59.921 ************************************ 00:19:59.921 START TEST nvmf_digest_error 00:19:59.921 ************************************ 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93007 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93007 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93007 ']' 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:59.921 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.921 15:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:59.921 [2024-07-15 15:42:54.871309] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:59.921 [2024-07-15 15:42:54.871396] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.921 [2024-07-15 15:42:55.010908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.180 [2024-07-15 15:42:55.060639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.180 [2024-07-15 15:42:55.060706] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.180 [2024-07-15 15:42:55.060730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.180 [2024-07-15 15:42:55.060738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.180 [2024-07-15 15:42:55.060744] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.180 [2024-07-15 15:42:55.060769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:00.180 [2024-07-15 15:42:55.133131] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@10 -- # set +x 00:20:00.180 null0 00:20:00.180 [2024-07-15 15:42:55.198673] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.180 [2024-07-15 15:42:55.222713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93032 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93032 /var/tmp/bperf.sock 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93032 ']' 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:00.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.180 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:00.180 [2024-07-15 15:42:55.287353] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:20:00.180 [2024-07-15 15:42:55.287462] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93032 ] 00:20:00.438 [2024-07-15 15:42:55.425953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.438 [2024-07-15 15:42:55.475280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.438 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.438 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:00.438 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:00.438 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:00.696 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:00.696 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.696 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:00.696 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.696 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:00.696 15:42:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:00.954 nvme0n1 00:20:00.954 15:42:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:00.954 15:42:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.954 15:42:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:00.954 15:42:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.954 15:42:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:00.954 15:42:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:01.213 Running I/O for 2 seconds... 
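
[editor's sketch] The digest_error case set up above differs from the clean case only in how crc32c is serviced: the target routes the opcode to the error accel module before init, injection is kept disabled while the host attaches, and corruption is then injected so the data digests the target generates no longer match what the host computes on receive, which shows up as the digest errors below. A sketch of the same RPC sequence, using only calls that appear in this trace (target on the default /var/tmp/spdk.sock, host bdevperf on /var/tmp/bperf.sock; the netns wrapper and the batched target/transport configuration are omitted):

  # Target side: route crc32c through the error module, leave injection off for now.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # Host side: retry forever so injected digest failures surface as retried
  # transient transport errors rather than failed I/O.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Now corrupt the next 256 crc32c results on the target and run the workload.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
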
00:20:01.213 [2024-07-15 15:42:56.183209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.183265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.183278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.195094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.195159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.195186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.206907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.206955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.206966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.217747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.217793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.217805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.229444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.229487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.229499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.239827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.239872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.239884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.250147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.250192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.250204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.262072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.262118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.262129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.273350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.273396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.273407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.284795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.284840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.284851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.295280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.295325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.295336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.306321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.306366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.306378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.317829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.317875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.317886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.330117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.330147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.330158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.213 [2024-07-15 15:42:56.340089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.213 [2024-07-15 15:42:56.340137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.213 [2024-07-15 15:42:56.340149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.352770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.352815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.352827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.363318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.363362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.363373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.375292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.375337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.375348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.386263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.386308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.386319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.399008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.399056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.399069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.409139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.409183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:01.474 [2024-07-15 15:42:56.409195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.421581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.421625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.421637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.432985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.433029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.433041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.444958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.445001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.445012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.455406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.455450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.455461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.466042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.466086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.466097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.477829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.477874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.477886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.489423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.489468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1025 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.489479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.499229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.499273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.499284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.513220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.513265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.513276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.522471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.522516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.522527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.534633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.534677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.534688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.545385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.545429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.545440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.557474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.557518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.557545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.567530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.567582] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.567593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.579140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.579185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.579212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.590528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.590583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.590594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.474 [2024-07-15 15:42:56.603276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.474 [2024-07-15 15:42:56.603321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.474 [2024-07-15 15:42:56.603349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.613985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.614029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.614040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.625737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.625781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.625793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.637834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.637878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.637889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.648948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.648993] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.649004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.660875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.660920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.660931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.671980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.672024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.672035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.682103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.682148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.682159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.693503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.693557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.693568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.705433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.705479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.705490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.716885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.716929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.716940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.726254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 
00:20:01.735 [2024-07-15 15:42:56.726298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.726310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.738286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.738331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.738342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.749242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.749286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.749297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.760430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.760475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.735 [2024-07-15 15:42:56.760486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.735 [2024-07-15 15:42:56.771561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.735 [2024-07-15 15:42:56.771614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.736 [2024-07-15 15:42:56.771625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.736 [2024-07-15 15:42:56.783335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.736 [2024-07-15 15:42:56.783379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.736 [2024-07-15 15:42:56.783390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.736 [2024-07-15 15:42:56.794985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.736 [2024-07-15 15:42:56.795034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.736 [2024-07-15 15:42:56.795046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.736 [2024-07-15 15:42:56.804665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.736 [2024-07-15 15:42:56.804708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.736 [2024-07-15 15:42:56.804720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.736 [2024-07-15 15:42:56.816771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.736 [2024-07-15 15:42:56.816815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.736 [2024-07-15 15:42:56.816826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.736 [2024-07-15 15:42:56.828614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.736 [2024-07-15 15:42:56.828651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.736 [2024-07-15 15:42:56.828677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.736 [2024-07-15 15:42:56.841429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.736 [2024-07-15 15:42:56.841474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.736 [2024-07-15 15:42:56.841485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.736 [2024-07-15 15:42:56.853828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.736 [2024-07-15 15:42:56.853861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.736 [2024-07-15 15:42:56.853890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.995 [2024-07-15 15:42:56.867797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.995 [2024-07-15 15:42:56.867827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:56.867838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:56.881848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:56.881892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:56.881903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:56.891678] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:56.891721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:56.891732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:56.903091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:56.903152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:56.903164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:56.914259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:56.914304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:56.914315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:56.925843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:56.925887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:56.925898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:56.936907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:56.936951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:56.936963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:56.946172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:56.946215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:56.946226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:56.957252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:56.957296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:56.957307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
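
[editor's note] Each record in the burst above follows the same pattern: the host's TCP receive path flags a data digest mismatch, the affected READ is printed, and the command completes with TRANSIENT TRANSPORT ERROR (00/22), which bdev_nvme keeps retrying because --bdev-retry-count is -1. When triaging a capture of this output offline, the two counts should track each other closely; a plain grep is enough to check (the capture file name here is assumed):

  # Both patterns are copied from the records above; nvmf-digest-error.log is a hypothetical capture file.
  grep -c 'data digest error on tqpair' nvmf-digest-error.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf-digest-error.log
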
00:20:01.996 [2024-07-15 15:42:56.971061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:56.971108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:56.971135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:56.981075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:56.981119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:56.981130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:56.991431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:56.991476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:56.991486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:57.003508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:57.003562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:57.003573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:57.014679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:57.014725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:57.014737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:57.025767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:57.025799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:57.025811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:57.039706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:57.039753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:57.039766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:57.052021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:57.052067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:57.052079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:57.063419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:57.063464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:57.063492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:57.075556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:57.075611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:57.075623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:57.087741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:57.087787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:57.087799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:57.098666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:57.098710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:57.098721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:57.111100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:57.111176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:57.111188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.996 [2024-07-15 15:42:57.123832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:01.996 [2024-07-15 15:42:57.123877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.996 [2024-07-15 15:42:57.123888] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.136345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.136390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.136402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.148106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.148150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.148161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.160324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.160369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.160381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.172110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.172155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.172166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.181939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.181983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.181994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.193471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.193516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.193528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.207957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.208002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.208013] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.220255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.220301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.220313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.230032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.230077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.230088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.242436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.242482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.242493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.254215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.254259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.254270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.265060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.265104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.265115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.276842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.276886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.276896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.286318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.286363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:02.256 [2024-07-15 15:42:57.286374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.298333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.298377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.298389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.310103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.310147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.310158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.320053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.320097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.256 [2024-07-15 15:42:57.320109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.256 [2024-07-15 15:42:57.331987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.256 [2024-07-15 15:42:57.332031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.257 [2024-07-15 15:42:57.332042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.257 [2024-07-15 15:42:57.344255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.257 [2024-07-15 15:42:57.344300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.257 [2024-07-15 15:42:57.344311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.257 [2024-07-15 15:42:57.356095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.257 [2024-07-15 15:42:57.356139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.257 [2024-07-15 15:42:57.356149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.257 [2024-07-15 15:42:57.366758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.257 [2024-07-15 15:42:57.366825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:25324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.257 [2024-07-15 15:42:57.366852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.257 [2024-07-15 15:42:57.378415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.257 [2024-07-15 15:42:57.378459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.257 [2024-07-15 15:42:57.378470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.390156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.390203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.390244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.401687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.401730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.401741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.414959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.415007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.415019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.425326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.425371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.425382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.436406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.436450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.436461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.448745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.448777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.448789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.459010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.459058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.459070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.470061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.470106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.470117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.481406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.481450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.481461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.493346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.493390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.493401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.503195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.503239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.503250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.514112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.514156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.514167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.526593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 
[2024-07-15 15:42:57.526638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.526650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.536622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.536666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.536677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.549090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.549134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.549146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.559186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.559230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.559241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.571204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.571247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.571258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.582719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.582763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.582774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.593069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.593113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.593124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.605107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.605152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.605163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.617031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.617075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.617086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.626453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.626498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.626509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.517 [2024-07-15 15:42:57.639703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.517 [2024-07-15 15:42:57.639746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.517 [2024-07-15 15:42:57.639757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.777 [2024-07-15 15:42:57.652003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.777 [2024-07-15 15:42:57.652047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.777 [2024-07-15 15:42:57.652058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.777 [2024-07-15 15:42:57.662961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.777 [2024-07-15 15:42:57.663009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.777 [2024-07-15 15:42:57.663021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.777 [2024-07-15 15:42:57.673202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.777 [2024-07-15 15:42:57.673247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.777 [2024-07-15 15:42:57.673258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.777 [2024-07-15 15:42:57.685692] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.777 [2024-07-15 15:42:57.685737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.777 [2024-07-15 15:42:57.685748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.777 [2024-07-15 15:42:57.696828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.777 [2024-07-15 15:42:57.696871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.777 [2024-07-15 15:42:57.696883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.777 [2024-07-15 15:42:57.708390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.777 [2024-07-15 15:42:57.708435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.777 [2024-07-15 15:42:57.708447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.777 [2024-07-15 15:42:57.719700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.777 [2024-07-15 15:42:57.719745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.777 [2024-07-15 15:42:57.719756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.777 [2024-07-15 15:42:57.729637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.777 [2024-07-15 15:42:57.729683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.777 [2024-07-15 15:42:57.729694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.777 [2024-07-15 15:42:57.741455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.777 [2024-07-15 15:42:57.741499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.777 [2024-07-15 15:42:57.741510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.777 [2024-07-15 15:42:57.752343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.777 [2024-07-15 15:42:57.752388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.777 [2024-07-15 15:42:57.752398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:02.777 [2024-07-15 15:42:57.762210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.777 [2024-07-15 15:42:57.762254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.762265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.778 [2024-07-15 15:42:57.773158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.778 [2024-07-15 15:42:57.773201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.773213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.778 [2024-07-15 15:42:57.784025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.778 [2024-07-15 15:42:57.784068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.784080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.778 [2024-07-15 15:42:57.793806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.778 [2024-07-15 15:42:57.793850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.793861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.778 [2024-07-15 15:42:57.805212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.778 [2024-07-15 15:42:57.805256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.805268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.778 [2024-07-15 15:42:57.816936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.778 [2024-07-15 15:42:57.816981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.816992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.778 [2024-07-15 15:42:57.829258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.778 [2024-07-15 15:42:57.829303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.829314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.778 [2024-07-15 15:42:57.840640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.778 [2024-07-15 15:42:57.840684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.840695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.778 [2024-07-15 15:42:57.850127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.778 [2024-07-15 15:42:57.850171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.850182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.778 [2024-07-15 15:42:57.862416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.778 [2024-07-15 15:42:57.862462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.862475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.778 [2024-07-15 15:42:57.876249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.778 [2024-07-15 15:42:57.876293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.876304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.778 [2024-07-15 15:42:57.888213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.778 [2024-07-15 15:42:57.888257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.888268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.778 [2024-07-15 15:42:57.898659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:02.778 [2024-07-15 15:42:57.898703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.778 [2024-07-15 15:42:57.898713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.037 [2024-07-15 15:42:57.912240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.037 [2024-07-15 15:42:57.912285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-07-15 15:42:57.912296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.037 [2024-07-15 15:42:57.923804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.037 [2024-07-15 15:42:57.923849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-07-15 15:42:57.923860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.037 [2024-07-15 15:42:57.935014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.037 [2024-07-15 15:42:57.935061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-07-15 15:42:57.935073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.037 [2024-07-15 15:42:57.946445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.037 [2024-07-15 15:42:57.946489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-07-15 15:42:57.946500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.037 [2024-07-15 15:42:57.956060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.037 [2024-07-15 15:42:57.956104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-07-15 15:42:57.956115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.037 [2024-07-15 15:42:57.967272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.037 [2024-07-15 15:42:57.967317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-07-15 15:42:57.967328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.037 [2024-07-15 15:42:57.978171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.037 [2024-07-15 15:42:57.978215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-07-15 15:42:57.978226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.037 [2024-07-15 15:42:57.989715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.037 [2024-07-15 15:42:57.989745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.037 [2024-07-15 15:42:57.989756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.037 [2024-07-15 15:42:58.000797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.000842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.000853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.012618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.012664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.012676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.022385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.022429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.022440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.035339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.035383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.035394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.045402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.045448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.045459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.056414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.056458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.056469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.068565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.068609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 
[2024-07-15 15:42:58.068620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.080106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.080149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.080160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.092566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.092610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.092621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.104021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.104065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.104076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.113341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.113387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.113398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.125143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.125186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.125197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.136419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.136463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.136474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.148811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.148856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1688 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.148867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.038 [2024-07-15 15:42:58.159596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69e3e0) 00:20:03.038 [2024-07-15 15:42:58.159639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.038 [2024-07-15 15:42:58.159650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.297 00:20:03.297 Latency(us) 00:20:03.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.297 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:03.297 nvme0n1 : 2.00 22273.65 87.01 0.00 0.00 5740.18 2651.23 15728.64 00:20:03.297 =================================================================================================================== 00:20:03.297 Total : 22273.65 87.01 0.00 0.00 5740.18 2651.23 15728.64 00:20:03.297 0 00:20:03.297 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:03.297 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:03.297 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:03.297 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:03.297 | .driver_specific 00:20:03.297 | .nvme_error 00:20:03.297 | .status_code 00:20:03.297 | .command_transient_transport_error' 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 174 > 0 )) 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93032 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93032 ']' 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93032 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93032 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:03.556 killing process with pid 93032 00:20:03.556 Received shutdown signal, test time was about 2.000000 seconds 00:20:03.556 00:20:03.556 Latency(us) 00:20:03.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.556 =================================================================================================================== 00:20:03.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93032' 00:20:03.556 15:42:58 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93032 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93032 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93103 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93103 /var/tmp/bperf.sock 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93103 ']' 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:03.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.556 15:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:03.556 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:03.556 Zero copy mechanism will not be used. 00:20:03.556 [2024-07-15 15:42:58.664252] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
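The xtrace above closes out the 4 KiB, queue-depth-128 randread pass and starts the next one: get_transient_errcount reads the per-bdev NVMe error counters for nvme0n1 over the bperf RPC socket, host/digest.sh asserts that the transient-transport-error counter is non-zero ((( 174 > 0 ))), the first bdevperf instance (pid 93032) is killed, and run_bperf_err relaunches bdevperf (pid 93103) for a randread run with 131072-byte I/O at queue depth 16. A minimal sketch of that error-count check, reusing only the socket path, RPC call, and jq filter that appear in the trace:

  # Sketch only: mirrors the get_transient_errcount step from the xtrace above. Assumes
  # bdevperf is serving RPCs on /var/tmp/bperf.sock and exposes the bdev "nvme0n1".
  get_transient_errcount() {
      local bdev=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }
  errcount=$(get_transient_errcount nvme0n1)
  (( errcount > 0 ))   # fails the test unless digest corruption actually produced transient transport errors
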
00:20:03.556 [2024-07-15 15:42:58.664350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93103 ] 00:20:03.815 [2024-07-15 15:42:58.799210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.815 [2024-07-15 15:42:58.849316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.748 15:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.748 15:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:04.748 15:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:04.748 15:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:04.748 15:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:04.748 15:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.748 15:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:04.748 15:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.748 15:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:04.748 15:42:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:05.007 nvme0n1 00:20:05.007 15:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:05.007 15:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.007 15:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:05.007 15:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.007 15:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:05.007 15:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:05.268 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:05.268 Zero copy mechanism will not be used. 00:20:05.268 Running I/O for 2 seconds... 
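The setup for the 128 KiB pass mirrors the first one: once bdevperf 93103 is listening on /var/tmp/bperf.sock, the script enables per-status-code NVMe error accounting with --nvme-error-stat and --bdev-retry-count -1, clears any leftover crc32c error injection, attaches the TCP controller with data digest enabled (--ddgst), re-arms crc32c corruption (accel_error_inject_error -o crc32c -t corrupt -i 32), and starts the 2-second run through bdevperf.py perform_tests. A sketch of that RPC sequence, under the assumption that rpc_cmd addresses the nvmf target application's default socket (shown here as /var/tmp/spdk.sock) while bperf_rpc addresses bdevperf's socket:

  # Sketch of the second-pass setup visible in the xtrace above. The bperf socket path and all
  # RPC names/arguments are taken from the trace; the target socket for rpc_cmd is an assumption.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock
  TGT_SOCK=/var/tmp/spdk.sock

  $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # count NVMe errors per status code, keep retrying
  $RPC -s $TGT_SOCK accel_error_inject_error -o crc32c -t disable                      # clear stale injection from the previous pass
  $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                                   # attach with TCP data digest enabled
  $RPC -s $TGT_SOCK accel_error_inject_error -o crc32c -t corrupt -i 32                # re-arm crc32c corruption
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests

The flood of data digest errors that follows (tqpair 0x160a380, READs with len:32, matching the 131072-byte I/O size) is the intended outcome: each one is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly the counter the next get_transient_errcount check will read.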
00:20:05.268 [2024-07-15 15:43:00.224065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.224122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.224135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.228162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.228209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.228221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.231360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.231405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.231416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.235223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.235269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.235280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.238353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.238399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.238410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.241991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.242037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.242048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.245244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.245289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.245299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.249057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.249102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.249113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.252662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.252706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.252717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.255524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.255593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.255605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.259455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.259500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.259511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.263948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.263994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.264006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.268458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.268504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.268515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.272685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.272732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.272743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.275160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.275219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.275230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.279539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.279593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.279605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.283948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.283993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.284004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.286756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.286823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.286850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.290177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.290222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.290233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.293753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.293785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.293797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.297364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.297410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.297420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.268 [2024-07-15 15:43:00.301298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.268 [2024-07-15 15:43:00.301344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-15 15:43:00.301355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.304305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.304351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.304361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.308326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.308371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.308381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.312129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.312174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.312185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.314954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.314988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.315001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.319335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.319381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.319391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.324031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.324077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:05.269 [2024-07-15 15:43:00.324088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.327090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.327153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.327165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.331194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.331241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.331252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.335343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.335388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.335399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.338314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.338359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.338369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.342208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.342253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.342265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.346089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.346134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.346145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.349413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.349458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.349468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.352409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.352454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.352464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.356235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.356280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.356291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.359408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.359453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.359464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.363192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.363224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.363235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.366841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.366889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.366901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.370479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.370523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.370560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.374199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.374244] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.374255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.377700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.377746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.377757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.381198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.381242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.381253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.385333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.385379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.385390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.388900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.388945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.388956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.392242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.392287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.392298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.269 [2024-07-15 15:43:00.396423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.269 [2024-07-15 15:43:00.396469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-15 15:43:00.396480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.530 [2024-07-15 15:43:00.399823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.530 [2024-07-15 15:43:00.399867] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-07-15 15:43:00.399877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.530 [2024-07-15 15:43:00.404443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.530 [2024-07-15 15:43:00.404488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-07-15 15:43:00.404499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.530 [2024-07-15 15:43:00.408417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.530 [2024-07-15 15:43:00.408462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-07-15 15:43:00.408473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.530 [2024-07-15 15:43:00.411662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.530 [2024-07-15 15:43:00.411706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-07-15 15:43:00.411717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.530 [2024-07-15 15:43:00.414849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.530 [2024-07-15 15:43:00.414881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-07-15 15:43:00.414893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.530 [2024-07-15 15:43:00.418412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.530 [2024-07-15 15:43:00.418444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-07-15 15:43:00.418455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.530 [2024-07-15 15:43:00.422031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.530 [2024-07-15 15:43:00.422063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-07-15 15:43:00.422074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.530 [2024-07-15 15:43:00.425379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x160a380) 00:20:05.530 [2024-07-15 15:43:00.425424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-07-15 15:43:00.425435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.530 [2024-07-15 15:43:00.428761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.530 [2024-07-15 15:43:00.428805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.428816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.432598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.432626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.432636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.435867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.435911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.435922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.439523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.439577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.439589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.443761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.443806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.443817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.446704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.446748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.446758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.450156] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.450201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.450212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.453605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.453648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.453659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.457585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.457629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.457640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.460385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.460429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.460440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.464435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.464481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.464492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.468220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.468266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.468277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.472253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.472298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.472309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:05.531 [2024-07-15 15:43:00.474935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.474981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.475008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.478602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.478647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.478658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.482136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.482182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.482193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.485463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.485494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.485505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.488939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.489000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.489011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.492323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.492368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.492379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.496382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.496413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.496424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.500380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.500425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.500436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.503498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.503553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.503565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.507605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.507650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.507660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.510496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.510549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.510561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.514248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.514293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.514304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.517267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.517312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.517323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.520863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.520909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.520936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.524577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.524623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.524634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.527727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.527772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-07-15 15:43:00.527783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.531 [2024-07-15 15:43:00.531183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.531 [2024-07-15 15:43:00.531228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.531239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.534953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.534985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.534997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.538661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.538707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.538718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.542268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.542313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.542324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.546044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.546089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 
[2024-07-15 15:43:00.546100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.549137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.549182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.549193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.552815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.552860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.552871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.556990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.557036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.557047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.559977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.560023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.560033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.563667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.563714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.563725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.567418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.567463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.567474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.570742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.570813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.570842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.574530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.574574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.574584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.578279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.578324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.578335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.581626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.581671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.581682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.585933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.585979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.585990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.590181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.590226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.590237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.592939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.592982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.592993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.596639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.596682] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.596693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.600586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.600630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.600642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.605199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.605245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.605256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.609214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.609259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.609270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.612159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.612203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.612215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.532 [2024-07-15 15:43:00.615770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.532 [2024-07-15 15:43:00.615816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-07-15 15:43:00.615827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.533 [2024-07-15 15:43:00.619312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.533 [2024-07-15 15:43:00.619357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-07-15 15:43:00.619369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.533 [2024-07-15 15:43:00.623007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.533 [2024-07-15 15:43:00.623040] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-07-15 15:43:00.623051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.533 [2024-07-15 15:43:00.626583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.533 [2024-07-15 15:43:00.626628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-07-15 15:43:00.626639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.533 [2024-07-15 15:43:00.629982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.533 [2024-07-15 15:43:00.630027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-07-15 15:43:00.630038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.533 [2024-07-15 15:43:00.633941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.533 [2024-07-15 15:43:00.633987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-07-15 15:43:00.633998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.533 [2024-07-15 15:43:00.636822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.533 [2024-07-15 15:43:00.636865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-07-15 15:43:00.636876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.533 [2024-07-15 15:43:00.640741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.533 [2024-07-15 15:43:00.640772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-07-15 15:43:00.640783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.533 [2024-07-15 15:43:00.644862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.533 [2024-07-15 15:43:00.644907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-07-15 15:43:00.644918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.533 [2024-07-15 15:43:00.648021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x160a380) 00:20:05.533 [2024-07-15 15:43:00.648066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-07-15 15:43:00.648077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.533 [2024-07-15 15:43:00.651372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.533 [2024-07-15 15:43:00.651416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-07-15 15:43:00.651427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.533 [2024-07-15 15:43:00.655019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.533 [2024-07-15 15:43:00.655082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-07-15 15:43:00.655124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.659225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.659256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.659297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.662261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.662305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.662316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.665987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.666034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.666046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.669478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.669510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.669532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.673043] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.673088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.673099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.676237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.676282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.676292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.679914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.679959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.679971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.683808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.683852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.683863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.687409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.687453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.687464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.690482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.690529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.690551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.693643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.693689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.693701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:20:05.794 [2024-07-15 15:43:00.697710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.697758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.697770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.701832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.701879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.701905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.705642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.705704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.705717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.709990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.710037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.710049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.713412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.713458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.713469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.717627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.717674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.717690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.721330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.721376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.721387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.725206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.725253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.794 [2024-07-15 15:43:00.725264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.794 [2024-07-15 15:43:00.729212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.794 [2024-07-15 15:43:00.729257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.729269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.733412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.733457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.733469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.736900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.736947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.736972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.740597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.740642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.740654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.744613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.744651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.744664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.748737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.748783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.748794] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.751077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.751121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.751133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.755362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.755407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.755418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.758609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.758653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.758665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.762043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.762088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.762099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.766017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.766063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.766074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.770561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.770606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.770617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.773732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.773776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.773787] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.776918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.776950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.776961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.780509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.780565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.780577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.783946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.783991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.784002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.787680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.787726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.787737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.791635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.791680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.791692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.794271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.794316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.794327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.797974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.798020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:05.795 [2024-07-15 15:43:00.798032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.801397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.801430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.801441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.805072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.805118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.805129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.809222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.809268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.809279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.812100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.812144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.812156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.815890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.815936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.815947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.819128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.819190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.819216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.822629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.822693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.822704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.795 [2024-07-15 15:43:00.825984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.795 [2024-07-15 15:43:00.826030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.795 [2024-07-15 15:43:00.826042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.829280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.829326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.829337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.832767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.832796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.832807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.837211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.837258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.837269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.841643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.841689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.841700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.844859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.844903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.844915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.848705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.848747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.848759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.852941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.852986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.852997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.857496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.857552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.857565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.860722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.860767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.860779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.864905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.864952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.864963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.869561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.869608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.869619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.873762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.873807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.873818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.876767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.876812] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.876823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.880613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.880658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.880670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.884336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.884381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.884392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.887755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.887800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.887812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.891766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.891811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.891822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.895369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.895414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.895426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.898553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.898598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.898609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.902008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.902054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.902065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.906558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.906614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.906626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.911048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.911095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.911121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.913867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.913926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.913937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.796 [2024-07-15 15:43:00.917787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:05.796 [2024-07-15 15:43:00.917833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.796 [2024-07-15 15:43:00.917845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.058 [2024-07-15 15:43:00.923499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.058 [2024-07-15 15:43:00.923575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.058 [2024-07-15 15:43:00.923588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.058 [2024-07-15 15:43:00.928464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.058 [2024-07-15 15:43:00.928527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.058 [2024-07-15 15:43:00.928585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.058 [2024-07-15 15:43:00.932138] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.058 [2024-07-15 15:43:00.932184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.058 [2024-07-15 15:43:00.932195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.058 [2024-07-15 15:43:00.936511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.058 [2024-07-15 15:43:00.936585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.058 [2024-07-15 15:43:00.936597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.058 [2024-07-15 15:43:00.941268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.058 [2024-07-15 15:43:00.941313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.058 [2024-07-15 15:43:00.941324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.058 [2024-07-15 15:43:00.945248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.058 [2024-07-15 15:43:00.945293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.058 [2024-07-15 15:43:00.945304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.058 [2024-07-15 15:43:00.948892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.058 [2024-07-15 15:43:00.948968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.058 [2024-07-15 15:43:00.948979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.058 [2024-07-15 15:43:00.953169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.058 [2024-07-15 15:43:00.953215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.058 [2024-07-15 15:43:00.953226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.058 [2024-07-15 15:43:00.956707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.058 [2024-07-15 15:43:00.956753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.058 [2024-07-15 15:43:00.956764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:06.058 [2024-07-15 15:43:00.960838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.058 [2024-07-15 15:43:00.960883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.058 [2024-07-15 15:43:00.960894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:00.964185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:00.964230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:00.964240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:00.967667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:00.967713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:00.967724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:00.970904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:00.970936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:00.970948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:00.974127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:00.974171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:00.974181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:00.977270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:00.977315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:00.977326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:00.980291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:00.980336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:00.980347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:00.983901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:00.983946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:00.983957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:00.987259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:00.987304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:00.987314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:00.990555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:00.990601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:00.990612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:00.993811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:00.993855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:00.993867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:00.997362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:00.997407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:00.997417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.001087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.001133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.001144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.004474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.004519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.004529] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.007652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.007695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.007706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.011217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.011263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.011273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.014980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.015012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.015023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.017975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.018021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.018033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.022546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.022590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.022601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.026135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.026180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.026191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.029896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.029941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 
15:43:01.029952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.032662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.032706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.032716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.036695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.036740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.036751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.040228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.040273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.040283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.043069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.043147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.043173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.046551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.046595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.046605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.050442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.050487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.050497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.053083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.053127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.053138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.057175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.057221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.057232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.059 [2024-07-15 15:43:01.060075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.059 [2024-07-15 15:43:01.060118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.059 [2024-07-15 15:43:01.060129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.063702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.063746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.063757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.067774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.067819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.067829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.071239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.071283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.071294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.074211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.074255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.074266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.077837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.077880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.077891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.081686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.081732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.081743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.084840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.084886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.084898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.088189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.088235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.088245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.092100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.092144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.092155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.096827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.096873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.096885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.100132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.100177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.100188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.104010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.104056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.104067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.107877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.107909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.107935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.111500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.111572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.111584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.114679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.114724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.114736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.118414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.118459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.118471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.122160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.122205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.122216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.125558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.125604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.125615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.129250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 
[2024-07-15 15:43:01.129295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.129306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.133332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.133377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.133389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.137237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.137282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.137294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.140495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.140550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.140562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.144250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.144296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.144308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.147513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.147583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.147595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.150996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.151030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.151042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.154684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.154730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.154741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.157901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.157947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.157958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.161845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.161891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.161902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.060 [2024-07-15 15:43:01.165375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.060 [2024-07-15 15:43:01.165420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.060 [2024-07-15 15:43:01.165431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.061 [2024-07-15 15:43:01.168985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.061 [2024-07-15 15:43:01.169030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.061 [2024-07-15 15:43:01.169040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.061 [2024-07-15 15:43:01.172164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.061 [2024-07-15 15:43:01.172209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.061 [2024-07-15 15:43:01.172220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.061 [2024-07-15 15:43:01.175378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.061 [2024-07-15 15:43:01.175422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.061 [2024-07-15 15:43:01.175433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.061 [2024-07-15 15:43:01.178854] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.061 [2024-07-15 15:43:01.178888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.061 [2024-07-15 15:43:01.178902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.061 [2024-07-15 15:43:01.182232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.061 [2024-07-15 15:43:01.182279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.061 [2024-07-15 15:43:01.182291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.061 [2024-07-15 15:43:01.186397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.061 [2024-07-15 15:43:01.186443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.061 [2024-07-15 15:43:01.186454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.190128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.190174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.190186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.193609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.193666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.193678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.197602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.197646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.197657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.201397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.201442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.201453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:06.322 [2024-07-15 15:43:01.204727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.204771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.204782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.208304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.208349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.208360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.211975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.212020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.212031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.215800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.215846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.215857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.219017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.219065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.219076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.222753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.222821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.222850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.227278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.227323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.227340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.231363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.231408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.231419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.234347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.234391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.234401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.238300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.238346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.238357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.241253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.241297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.241308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.245141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.245186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.245197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.248507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.248560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.248571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.252474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.252519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.252530] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.255890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.255934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.255945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.258726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.258770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.258781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.262296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.262341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.262352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.266080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.266125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.266136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.269141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.269186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.322 [2024-07-15 15:43:01.269197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.322 [2024-07-15 15:43:01.272439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.322 [2024-07-15 15:43:01.272484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.272494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.276065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.276109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 
15:43:01.276120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.279283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.279327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.279338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.283191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.283236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.283246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.287086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.287148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.287159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.290315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.290359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.290370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.293661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.293706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.293718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.296762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.296806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.296817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.300472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.300517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.300528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.303744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.303789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.303800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.307555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.307610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.307621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.310867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.310902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.310915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.314048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.314092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.314103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.317769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.317814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.317825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.321519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.321572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.321583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.325182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.325227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.325238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.328698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.328743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.328754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.332565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.332608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.332619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.335970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.336015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.336026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.339834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.339879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.339890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.343436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.343480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.343491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.346359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.346403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.346413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.350297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.350341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.350352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.354174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.354219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.354230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.357594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.357624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.357634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.360503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.360546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.360557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.364534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.364588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.364599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.367482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.367526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.367549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.370819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 [2024-07-15 15:43:01.370853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.323 [2024-07-15 15:43:01.370866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.323 [2024-07-15 15:43:01.374768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.323 
[2024-07-15 15:43:01.374837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.374849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.379023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.379086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.379114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.382870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.382908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.382922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.387563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.387619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.387650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.391789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.391825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.391837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.396985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.397035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.397048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.400590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.400635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.400646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.404287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.404332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.404343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.407552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.407606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.407617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.411018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.411064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.411090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.414606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.414651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.414663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.417628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.417674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.417684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.421468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.421513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.421524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.425383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.425428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.425439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.428436] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.428481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.428491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.432207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.432253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.432264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.435859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.435904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.435915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.438991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.439036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.439047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.442334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.442379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.442389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.445591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.445634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.445645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.324 [2024-07-15 15:43:01.449984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.324 [2024-07-15 15:43:01.450029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.324 [2024-07-15 15:43:01.450040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:20:06.585 [2024-07-15 15:43:01.453428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.453473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.453483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.457020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.457081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.457109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.460826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.460871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.460882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.464363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.464407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.464419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.467570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.467624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.467635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.471516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.471572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.471584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.474981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.475015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.475027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.478410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.478454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.478465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.481842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.481885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.481896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.485760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.485804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.485815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.488849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.488893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.488904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.492143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.492188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.492199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.495846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.495890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.495901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.499202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.499248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.499258] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.503194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.503254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.503265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.506569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.506615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.506626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.510362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.510407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.510418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.513351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.513395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.513406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.516961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.517006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.517017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.520686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.520732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.520743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.523656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.523700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.523711] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.527568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.527621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.527633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.531524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.531577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.531589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.535925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.535971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.535982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.539096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.539159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.539200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.542723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.542768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.542779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.546340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.546385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.585 [2024-07-15 15:43:01.546396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.549057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.549101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:06.585 [2024-07-15 15:43:01.549112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.585 [2024-07-15 15:43:01.552746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.585 [2024-07-15 15:43:01.552791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.552801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.556164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.556209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.556220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.559663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.559708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.559719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.562777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.562847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.562859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.566566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.566609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.566620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.570653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.570685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.570696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.573352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.573396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.573406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.577340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.577386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.577397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.581566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.581611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.581622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.585153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.585197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.585207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.588236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.588280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.588291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.591909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.591970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.591981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.595718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.595764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.595775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.598725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.598770] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.598782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.602236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.602281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.602292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.606021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.606067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.606078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.609362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.609407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.609417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.612900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.612945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.612956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.617085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.617131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.617142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.620444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.620490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.620501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.623283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.623328] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.623339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.626876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.626909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.626920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.629583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.629626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.629637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.633272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.633317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.633328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.637206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.637251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.637262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.641110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.641155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.641166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.644465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.644510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.644521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.647698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 
00:20:06.586 [2024-07-15 15:43:01.647743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.647754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.651563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.651618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.586 [2024-07-15 15:43:01.651630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.586 [2024-07-15 15:43:01.655251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.586 [2024-07-15 15:43:01.655295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.655306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.658739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.658790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.658833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.662283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.662327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.662338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.666097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.666143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.666154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.668751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.668795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.668806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.672452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.672497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.672507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.675870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.675915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.675927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.678929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.678961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.678972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.682704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.682736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.682747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.686375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.686408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.686419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.689289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.689318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.689345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.693478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.693523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.693559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.697593] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.697638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.697650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.700687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.700732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.700743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.704096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.704141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.704151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.708273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.708318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.708329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.587 [2024-07-15 15:43:01.712935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.587 [2024-07-15 15:43:01.712981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.587 [2024-07-15 15:43:01.712991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.848 [2024-07-15 15:43:01.715833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.848 [2024-07-15 15:43:01.715877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.848 [2024-07-15 15:43:01.715887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.848 [2024-07-15 15:43:01.719709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.848 [2024-07-15 15:43:01.719768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.848 [2024-07-15 15:43:01.719779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:06.848 [2024-07-15 15:43:01.724031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.848 [2024-07-15 15:43:01.724077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.848 [2024-07-15 15:43:01.724088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.848 [2024-07-15 15:43:01.728209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.848 [2024-07-15 15:43:01.728254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.848 [2024-07-15 15:43:01.728266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.848 [2024-07-15 15:43:01.730956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.848 [2024-07-15 15:43:01.731001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.848 [2024-07-15 15:43:01.731012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.848 [2024-07-15 15:43:01.734364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.848 [2024-07-15 15:43:01.734410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.848 [2024-07-15 15:43:01.734421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.848 [2024-07-15 15:43:01.738044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.848 [2024-07-15 15:43:01.738088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.848 [2024-07-15 15:43:01.738098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.848 [2024-07-15 15:43:01.741208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.848 [2024-07-15 15:43:01.741252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.848 [2024-07-15 15:43:01.741263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.848 [2024-07-15 15:43:01.745122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.848 [2024-07-15 15:43:01.745167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.848 [2024-07-15 15:43:01.745177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.848 [2024-07-15 15:43:01.748743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.848 [2024-07-15 15:43:01.748788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.848 [2024-07-15 15:43:01.748799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.848 [2024-07-15 15:43:01.751818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.848 [2024-07-15 15:43:01.751862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.848 [2024-07-15 15:43:01.751873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.848 [2024-07-15 15:43:01.755672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.848 [2024-07-15 15:43:01.755717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.755727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.758696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.758740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.758751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.762613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.762658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.762668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.765894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.765938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.765949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.769366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.769411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.769421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.773299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.773343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.773354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.777092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.777137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.777148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.780386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.780431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.780441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.784147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.784192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.784202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.788100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.788146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.788157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.791730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.791776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.791787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.794628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.794672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:06.849 [2024-07-15 15:43:01.794683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.798439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.798484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.798495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.802121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.802165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.802176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.805397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.805442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.805453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.809279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.809325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.809336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.812744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.812789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.812801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.816208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.816253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.816264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.820163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.820208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.820219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.823304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.823348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.823359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.827420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.827465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.827476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.830999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.831046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.831057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.834179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.834223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.834234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.837766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.837810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.837821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.841275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.841320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.841330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.844807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.844853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.844865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.848460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.848504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.848515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.852230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.852275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.852286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.856206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.849 [2024-07-15 15:43:01.856250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.849 [2024-07-15 15:43:01.856261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.849 [2024-07-15 15:43:01.858582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.858625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.858635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.862736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.862781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.862814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.865797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.865844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.865854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.869567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 
[2024-07-15 15:43:01.869611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.869621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.873660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.873705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.873715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.877708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.877753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.877764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.880795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.880841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.880852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.884675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.884721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.884732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.887842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.887886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.887897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.891935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.891980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.891991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.895759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.895805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.895815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.898952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.898983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.898994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.903271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.903317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.903328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.907802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.907846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.907857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.910647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.910690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.910701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.914949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.914997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.915009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.918086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.918133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.918144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.922083] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.922129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.922140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.926195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.926256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.926268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.930142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.930188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.930199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.933548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.933617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.933631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.937908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.937985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.937996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.941804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.941835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.941847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.946079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.946125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.946144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:06.850 [2024-07-15 15:43:01.950039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.950085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.950096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.954000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.954047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.954059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.958319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.958368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.958380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.961765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.961801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.961814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.966107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.850 [2024-07-15 15:43:01.966154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.850 [2024-07-15 15:43:01.966165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.850 [2024-07-15 15:43:01.970044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.851 [2024-07-15 15:43:01.970090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.851 [2024-07-15 15:43:01.970102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:06.851 [2024-07-15 15:43:01.974955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:06.851 [2024-07-15 15:43:01.974993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.851 [2024-07-15 15:43:01.975007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.111 [2024-07-15 15:43:01.978071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.111 [2024-07-15 15:43:01.978116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.111 [2024-07-15 15:43:01.978128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.111 [2024-07-15 15:43:01.982191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.111 [2024-07-15 15:43:01.982237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.111 [2024-07-15 15:43:01.982247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.111 [2024-07-15 15:43:01.986078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.111 [2024-07-15 15:43:01.986124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:01.986135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:01.989937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:01.989983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:01.989994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:01.994154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:01.994202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:01.994213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:01.997095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:01.997140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:01.997151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.001072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.001116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.001127] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.005433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.005480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.005491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.008409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.008455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.008466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.011852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.011898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.011909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.014963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.014997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.015009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.018618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.018663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.018675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.022624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.022656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.022667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.025559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.025591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.025602] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.029044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.029090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.029101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.033137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.033183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.033194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.036912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.036957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.036968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.039730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.039775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.039786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.043889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.043934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.043945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.047241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.047285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.047296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.050779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.050850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:07.112 [2024-07-15 15:43:02.050862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.055205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.055251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.055262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.058142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.058170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.058182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.061946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.061978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.061989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.065338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.065385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.065396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.069084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.069129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.069140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.072866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.072911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.072923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.076370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.076416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.076427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.080615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.080661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.080672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.084374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.084420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.084430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.087401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.112 [2024-07-15 15:43:02.087446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.112 [2024-07-15 15:43:02.087457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.112 [2024-07-15 15:43:02.091202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.091247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.091258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.095113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.095189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.095214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.098231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.098278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.098289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.102269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.102315] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.102326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.105460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.105505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.105517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.109428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.109459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.109470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.113323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.113369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.113381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.116906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.116953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.116964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.120701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.120747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.120758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.124798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.124828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.124840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.128077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.128122] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.128133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.131699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.131731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.131742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.135245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.135289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.135300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.139063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.139095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.139107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.142097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.142142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.142152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.145759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.145806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.145817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.148934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.148978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.148989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.152363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.152408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.152419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.155850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.155895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.155906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.159402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.159447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.159457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.162924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.162971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.162982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.166391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.166436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.166446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.169552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.169597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.169608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.174093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.174138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.174149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.178426] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.178471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.178482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.181493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.181561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.181574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.185087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.185132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.185143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.188600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.188645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.188655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.113 [2024-07-15 15:43:02.192653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.113 [2024-07-15 15:43:02.192698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.113 [2024-07-15 15:43:02.192709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.114 [2024-07-15 15:43:02.195742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.114 [2024-07-15 15:43:02.195785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.114 [2024-07-15 15:43:02.195796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.114 [2024-07-15 15:43:02.199471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380) 00:20:07.114 [2024-07-15 15:43:02.199515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.114 [2024-07-15 15:43:02.199525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0
00:20:07.114 [2024-07-15 15:43:02.203227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380)
00:20:07.114 [2024-07-15 15:43:02.203273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.114 [2024-07-15 15:43:02.203283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:07.114 [2024-07-15 15:43:02.206634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380)
00:20:07.114 [2024-07-15 15:43:02.206679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.114 [2024-07-15 15:43:02.206690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:07.114 [2024-07-15 15:43:02.210019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380)
00:20:07.114 [2024-07-15 15:43:02.210063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.114 [2024-07-15 15:43:02.210074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:07.114 [2024-07-15 15:43:02.213908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x160a380)
00:20:07.114 [2024-07-15 15:43:02.213952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.114 [2024-07-15 15:43:02.213963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:07.114
00:20:07.114 Latency(us)
00:20:07.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:07.114 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:20:07.114 nvme0n1 : 2.00 8520.09 1065.01 0.00 0.00 1874.50 606.95 6851.49
00:20:07.114 ===================================================================================================================
00:20:07.114 Total : 8520.09 1065.01 0.00 0.00 1874.50 606.95 6851.49
00:20:07.114 0
00:20:07.114 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:07.114 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:07.114 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:07.114 | .driver_specific
00:20:07.114 | .nvme_error
00:20:07.114 | .status_code
00:20:07.114 | .command_transient_transport_error'
00:20:07.114 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 549 > 0 ))
00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93103
00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z
93103 ']' 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93103 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93103 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:07.682 killing process with pid 93103 00:20:07.682 Received shutdown signal, test time was about 2.000000 seconds 00:20:07.682 00:20:07.682 Latency(us) 00:20:07.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.682 =================================================================================================================== 00:20:07.682 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93103' 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93103 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93103 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93188 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93188 /var/tmp/bperf.sock 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93188 ']' 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:07.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.682 15:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:07.682 [2024-07-15 15:43:02.712965] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
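The trace above shows how the test evaluates the randread pass and moves on: get_transient_errcount asks the bdevperf app for I/O statistics over its private RPC socket, extracts the NVMe transient-transport-error counter with jq, the check (( 549 > 0 )) confirms that digest errors really were counted, and the old bdevperf process (pid 93103) is killed before run_bperf_err launches a fresh instance for the randwrite pass. A minimal sketch of that counter read-out, assuming a bdevperf instance is already serving RPCs on /var/tmp/bperf.sock and exposing an nvme0n1 bdev as in the trace (variable names are mine):

```bash
#!/usr/bin/env bash
# Sketch of the counter read-out performed by get_transient_errcount above:
# pull per-bdev I/O statistics from the bdevperf RPC socket and extract the
# transient transport error count kept by the NVMe bdev module.
SPDK_DIR=/home/vagrant/spdk_repo/spdk   # repo path taken from the trace
BPERF_SOCK=/var/tmp/bperf.sock

errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
         | .driver_specific
         | .nvme_error
         | .status_code
         | .command_transient_transport_error')

# The digest_error test treats the pass as successful when at least one
# transient transport error was recorded (549 in the run above).
if (( errcount > 0 )); then
  echo "digest errors were detected and counted: $errcount"
fi
```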
00:20:07.682 [2024-07-15 15:43:02.713046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93188 ] 00:20:07.941 [2024-07-15 15:43:02.844682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.941 [2024-07-15 15:43:02.896679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.508 15:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.508 15:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:08.508 15:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:08.508 15:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:08.766 15:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:08.766 15:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.766 15:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:08.766 15:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.766 15:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:08.766 15:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:09.024 nvme0n1 00:20:09.283 15:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:09.283 15:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.283 15:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:09.283 15:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.283 15:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:09.283 15:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:09.283 Running I/O for 2 seconds... 
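Before perform_tests is issued, the trace above wires the new bdevperf instance up for the randwrite error pass: NVMe error statistics and unlimited bdev retries are enabled, any leftover crc32c error injection is cleared, the controller is attached over TCP with the data digest enabled (--ddgst), and crc32c corruption is re-armed so the digest checks below will fail. A condensed sketch of that RPC sequence, with helper names of my own and the assumption that rpc_cmd in the trace talks to the nvmf target's default RPC socket (the trace does not show its socket path):

```bash
#!/usr/bin/env bash
# Condensed sketch of the RPC sequence traced above for the randwrite error pass.
# Commands, flags and addresses are copied from the trace; the helper names and
# the default-socket assumption for the target side are mine.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

bperf_rpc()  { "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }  # bdevperf (initiator) app
target_rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }                   # nvmf target app, assumed default socket

# Keep per-command NVMe error statistics and retry failed I/O indefinitely.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any crc32c error injection left over from the previous (randread) pass.
target_rpc accel_error_inject_error -o crc32c -t disable

# Attach the NVMe-oF/TCP controller with the data digest enabled (--ddgst).
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm error injection for crc32c exactly as the trace does
# (-o crc32c -t corrupt -i 256); this is what makes the digest checks fail.
target_rpc accel_error_inject_error -o crc32c -t corrupt -i 256

# Start the queued randwrite workload inside the already-running bdevperf.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
```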
00:20:09.283 [2024-07-15 15:43:04.300823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f6458 00:20:09.283 [2024-07-15 15:43:04.301720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.283 [2024-07-15 15:43:04.301752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:09.283 [2024-07-15 15:43:04.311777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190e95a0 00:20:09.283 [2024-07-15 15:43:04.312337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.283 [2024-07-15 15:43:04.312363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:09.283 [2024-07-15 15:43:04.325705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190fd640 00:20:09.283 [2024-07-15 15:43:04.327655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.283 [2024-07-15 15:43:04.327702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:09.283 [2024-07-15 15:43:04.334049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190e8d30 00:20:09.283 [2024-07-15 15:43:04.334812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.283 [2024-07-15 15:43:04.334842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:09.283 [2024-07-15 15:43:04.346088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f8e88 00:20:09.283 [2024-07-15 15:43:04.347029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.283 [2024-07-15 15:43:04.347061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:09.283 [2024-07-15 15:43:04.355693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190ed4e8 00:20:09.283 [2024-07-15 15:43:04.356704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.283 [2024-07-15 15:43:04.356734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:09.283 [2024-07-15 15:43:04.365935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190ed0b0 00:20:09.283 [2024-07-15 15:43:04.366960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.283 [2024-07-15 15:43:04.366990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 
sqhd:0039 p:0 m:0 dnr:0 00:20:09.283 [2024-07-15 15:43:04.377582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190fdeb0 00:20:09.284 [2024-07-15 15:43:04.379032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.284 [2024-07-15 15:43:04.379077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:09.284 [2024-07-15 15:43:04.388216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190ec840 00:20:09.284 [2024-07-15 15:43:04.389827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:40 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.284 [2024-07-15 15:43:04.389868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:09.284 [2024-07-15 15:43:04.395820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190e9168 00:20:09.284 [2024-07-15 15:43:04.396540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.284 [2024-07-15 15:43:04.396571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:09.284 [2024-07-15 15:43:04.407755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190ef270 00:20:09.284 [2024-07-15 15:43:04.409107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.284 [2024-07-15 15:43:04.409149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:09.543 [2024-07-15 15:43:04.419493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190e1f80 00:20:09.543 [2024-07-15 15:43:04.420399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.543 [2024-07-15 15:43:04.420429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:09.543 [2024-07-15 15:43:04.429327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f31b8 00:20:09.543 [2024-07-15 15:43:04.430610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.543 [2024-07-15 15:43:04.430648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:09.543 [2024-07-15 15:43:04.439402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190e6300 00:20:09.543 [2024-07-15 15:43:04.440670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.543 [2024-07-15 15:43:04.440711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:09.543 [2024-07-15 15:43:04.448818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f6cc8 00:20:09.543 [2024-07-15 15:43:04.449786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.543 [2024-07-15 15:43:04.449816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:09.543 [2024-07-15 15:43:04.458916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190e7c50 00:20:09.543 [2024-07-15 15:43:04.459883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.543 [2024-07-15 15:43:04.459911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:09.543 [2024-07-15 15:43:04.471149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190eb328 00:20:09.543 [2024-07-15 15:43:04.472704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.543 [2024-07-15 15:43:04.472746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:09.543 [2024-07-15 15:43:04.481731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190feb58 00:20:09.543 [2024-07-15 15:43:04.483474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.543 [2024-07-15 15:43:04.483517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:09.543 [2024-07-15 15:43:04.489423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190e5220 00:20:09.543 [2024-07-15 15:43:04.490305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.543 [2024-07-15 15:43:04.490325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:09.543 [2024-07-15 15:43:04.501618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f8618 00:20:09.543 [2024-07-15 15:43:04.503042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.543 [2024-07-15 15:43:04.503085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:09.543 [2024-07-15 15:43:04.511047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190e9168 00:20:09.543 [2024-07-15 15:43:04.512288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.543 [2024-07-15 15:43:04.512331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:20:09.543 - 00:20:10.846 [repeated NVMe/TCP data digest failures, 2024-07-15 15:43:04.521 through 15:43:05.965: each iteration logs tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with a varying pdu address, followed by nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cdw0:0 p:0 m:0 dnr:0; the cid, lba, and sqhd values differ per command]
15:43:05.966001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:10.846 [2024-07-15 15:43:05.974621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f5378 00:20:11.106 [2024-07-15 15:43:05.975714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:05.975770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:05.984476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190e6fa8 00:20:11.106 [2024-07-15 15:43:05.985327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:05.985354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:05.994163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190ed4e8 00:20:11.106 [2024-07-15 15:43:05.994889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:05.994915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.005532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f4b08 00:20:11.106 [2024-07-15 15:43:06.006350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.006377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.015796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190ed4e8 00:20:11.106 [2024-07-15 15:43:06.016902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.016946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.025087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f4f40 00:20:11.106 [2024-07-15 15:43:06.026037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.026064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.035267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f7970 00:20:11.106 [2024-07-15 15:43:06.036251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:11.106 [2024-07-15 15:43:06.036277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.046176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190edd58 00:20:11.106 [2024-07-15 15:43:06.047032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.047065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.059444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190fbcf0 00:20:11.106 [2024-07-15 15:43:06.060961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.061001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.068629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190fc560 00:20:11.106 [2024-07-15 15:43:06.070260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.070301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.077067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190fc128 00:20:11.106 [2024-07-15 15:43:06.077794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.077817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.088434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f4b08 00:20:11.106 [2024-07-15 15:43:06.089726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.089753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.098404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f6458 00:20:11.106 [2024-07-15 15:43:06.099942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.099983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.108409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190e0630 00:20:11.106 [2024-07-15 15:43:06.109890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11930 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.109931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.116734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190de470 00:20:11.106 [2024-07-15 15:43:06.117346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.117369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.126510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f2948 00:20:11.106 [2024-07-15 15:43:06.127591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.127641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.137907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190ea248 00:20:11.106 [2024-07-15 15:43:06.139511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.139578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.144893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190ed920 00:20:11.106 [2024-07-15 15:43:06.145666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.145703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.154649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f8e88 00:20:11.106 [2024-07-15 15:43:06.155425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.155450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.163825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f2948 00:20:11.106 [2024-07-15 15:43:06.164476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.164499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.173205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f6890 00:20:11.106 [2024-07-15 15:43:06.173844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:23042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.106 [2024-07-15 15:43:06.173868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:11.106 [2024-07-15 15:43:06.182966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f5be8 00:20:11.107 [2024-07-15 15:43:06.183663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.107 [2024-07-15 15:43:06.183687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:11.107 [2024-07-15 15:43:06.195319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f8e88 00:20:11.107 [2024-07-15 15:43:06.196887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.107 [2024-07-15 15:43:06.196928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:11.107 [2024-07-15 15:43:06.202340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190fa7d8 00:20:11.107 [2024-07-15 15:43:06.203096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.107 [2024-07-15 15:43:06.203120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:11.107 [2024-07-15 15:43:06.212362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f9b30 00:20:11.107 [2024-07-15 15:43:06.213142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.107 [2024-07-15 15:43:06.213167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:11.107 [2024-07-15 15:43:06.224831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190e3498 00:20:11.107 [2024-07-15 15:43:06.226149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.107 [2024-07-15 15:43:06.226190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:11.107 [2024-07-15 15:43:06.234340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190ed4e8 00:20:11.366 [2024-07-15 15:43:06.236281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.366 [2024-07-15 15:43:06.236339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:11.366 [2024-07-15 15:43:06.245686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f81e0 00:20:11.366 [2024-07-15 15:43:06.246534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:7 nsid:1 lba:13805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.366 [2024-07-15 15:43:06.246573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:11.366 [2024-07-15 15:43:06.254893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f92c0 00:20:11.366 [2024-07-15 15:43:06.255660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.366 [2024-07-15 15:43:06.255698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:11.366 [2024-07-15 15:43:06.264629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190df988 00:20:11.366 [2024-07-15 15:43:06.265603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.366 [2024-07-15 15:43:06.265631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:11.366 [2024-07-15 15:43:06.273837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190e6b70 00:20:11.366 [2024-07-15 15:43:06.274834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.366 [2024-07-15 15:43:06.274865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:11.366 [2024-07-15 15:43:06.285440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190ef6a8 00:20:11.366 [2024-07-15 15:43:06.287049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.366 [2024-07-15 15:43:06.287095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:11.366 [2024-07-15 15:43:06.292588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263880) with pdu=0x2000190f2948 00:20:11.366 [2024-07-15 15:43:06.293284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.366 [2024-07-15 15:43:06.293307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:11.366 00:20:11.366 Latency(us) 00:20:11.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.366 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:11.366 nvme0n1 : 2.01 25429.18 99.33 0.00 0.00 5028.45 2040.55 14894.55 00:20:11.366 =================================================================================================================== 00:20:11.366 Total : 25429.18 99.33 0.00 0.00 5028.45 2040.55 14894.55 00:20:11.366 0 00:20:11.366 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:11.366 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:20:11.366 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:11.366 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:11.366 | .driver_specific 00:20:11.366 | .nvme_error 00:20:11.366 | .status_code 00:20:11.366 | .command_transient_transport_error' 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 )) 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93188 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93188 ']' 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93188 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93188 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:11.625 killing process with pid 93188 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93188' 00:20:11.625 Received shutdown signal, test time was about 2.000000 seconds 00:20:11.625 00:20:11.625 Latency(us) 00:20:11.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.625 =================================================================================================================== 00:20:11.625 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93188 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93188 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93277 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93277 /var/tmp/bperf.sock 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93277 ']' 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
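For reference, the transient-error check traced just above reduces to a single RPC call piped through jq; a minimal sketch reusing the bperf socket, bdev name and filter path shown in this trace (the counter read back as 200 in this run):

    # same call and filter as the get_transient_errcount trace above
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # the digest error test passes as long as at least one injected error was counted
    (( errcount > 0 ))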
common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.625 15:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:11.885 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:11.885 Zero copy mechanism will not be used. 00:20:11.885 [2024-07-15 15:43:06.789481] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:20:11.885 [2024-07-15 15:43:06.789574] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93277 ] 00:20:11.885 [2024-07-15 15:43:06.917208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.885 [2024-07-15 15:43:06.968425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.143 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.143 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:12.143 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:12.143 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:12.143 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:12.143 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.143 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:12.143 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.143 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:12.144 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:12.712 nvme0n1 00:20:12.712 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:12.712 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.712 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:12.712 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.712 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:12.712 15:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
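Condensed into one place, the host/digest.sh trace above amounts to the following setup for this randwrite / 131072-byte / qd16 pass; a rough sketch that reuses only the binaries, sockets, flags and target address appearing in the log, not a verbatim excerpt of the script:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # initiator-side I/O generator: randwrite, 128 KiB I/Os, queue depth 16, wait for RPC start (-z)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &

    # enable per-status-code NVMe error counters; retry-count value taken verbatim from the trace
    $RPC -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # attach the controller with data digest enabled (--ddgst) over TCP
    $RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # inject crc32c corruption (flags copied from the rpc_cmd trace; issued without -s,
    # i.e. against the default RPC socket rather than the bperf socket, as rpc_cmd does here)
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

    # run I/O for the configured 2 seconds; the corrupted digests surface below as
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests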
host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:12.712 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:12.712 Zero copy mechanism will not be used. 00:20:12.712 Running I/O for 2 seconds... 00:20:12.712 [2024-07-15 15:43:07.695056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 15:43:07.695365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.695391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.699849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 15:43:07.700158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.700179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.704575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 15:43:07.704827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.704852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.709099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 15:43:07.709345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.709370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.713915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 15:43:07.714166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.714191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.718341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 15:43:07.718616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.718649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.722839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 
15:43:07.723116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.723171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.727629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 15:43:07.727883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.727907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.732144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 15:43:07.732395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.732426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.736695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 15:43:07.736957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.736983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.741253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 15:43:07.741514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.741546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.745680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 15:43:07.745929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.745953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.750080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.712 [2024-07-15 15:43:07.750329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.712 [2024-07-15 15:43:07.750348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.712 [2024-07-15 15:43:07.754576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with 
pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.754872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.754893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.759189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.759437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.759461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.763723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.764007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.764032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.768374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.768675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.768695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.772909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.773201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.773221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.777472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.777728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.777752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.781951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.782199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.782223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.786450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.786728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.786752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.791055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.791324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.791348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.795676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.795916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.795956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.800236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.800484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.800508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.804790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.805027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.805082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.809282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.809554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.809577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.813898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.814148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.814172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.818454] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.818742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.818767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.823148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.823394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.823418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.827772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.828020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.828043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.832372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.832682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.832707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.713 [2024-07-15 15:43:07.837045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.713 [2024-07-15 15:43:07.837356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.713 [2024-07-15 15:43:07.837381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.842132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.842428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.842469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.847216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.847502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.847534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:20:12.974 [2024-07-15 15:43:07.851930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.852179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.852199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.856461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.856738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.856764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.861078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.861326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.861350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.865669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.865919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.865938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.870202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.870449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.870468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.874749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.875040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.875066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.879893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.880153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.880177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.884847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.885113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.885137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.889410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.889690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.889715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.893975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.894237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.894261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.898608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.898887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.898913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.974 [2024-07-15 15:43:07.903261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.974 [2024-07-15 15:43:07.903497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.974 [2024-07-15 15:43:07.903530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.907878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.908126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.908150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.912398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.912695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.912720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.916969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.917217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.917241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.921574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.921826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.921851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.926253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.926554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.926604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.931302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.931607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.931628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.936265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.936564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.936598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.941424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.941736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.941762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.946376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.946700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.946723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.951473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.951826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.951848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.956438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.956746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.956771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.961453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.961747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.961774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.966330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.966632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.966654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.971356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.971662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.971688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.976260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.976515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.976577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.981032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.981303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 
[2024-07-15 15:43:07.981334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.987265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.987602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.987633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.992310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.992636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.992663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:07.997161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:07.997458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:07.997485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:08.002081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:08.002387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:08.002428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:08.006915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:08.007191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:08.007233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:08.011749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:08.012056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:08.012077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:08.016498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:08.016781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:08.016806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:08.021361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.975 [2024-07-15 15:43:08.021646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.975 [2024-07-15 15:43:08.021671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.975 [2024-07-15 15:43:08.026072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.026340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.026365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.030647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.030938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.030963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.035450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.035712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.035737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.040059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.040315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.040341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.044683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.044952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.044976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.049357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.049623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.049649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.054052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.054318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.054343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.058622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.058928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.058954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.063154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.063426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.063450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.068065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.068319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.068343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.072873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.073163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.073189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.078001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.078283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.078308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.082819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.083116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.083142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.088217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.088473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.088497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.093321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.093672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.093694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.976 [2024-07-15 15:43:08.098449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:12.976 [2024-07-15 15:43:08.098833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.976 [2024-07-15 15:43:08.098875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.104161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.104471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.104496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.109406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.109759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.109785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.114483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.114823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.114865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.119353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 
[2024-07-15 15:43:08.119614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.119654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.124230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.124533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.124570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.129232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.129520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.129556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.134216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.134463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.134487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.138925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.139254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.139278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.143812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.144082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.144105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.148414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.148673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.148697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.152981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.153229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.153254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.157642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.157911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.157950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.162340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.162606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.162626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.167193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.167488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.167508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.171794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.172064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.172088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.176331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.176606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.176630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.180949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.181195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.181220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.237 [2024-07-15 15:43:08.185502] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.237 [2024-07-15 15:43:08.185808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.237 [2024-07-15 15:43:08.185833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.190288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.190547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.190582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.194978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.195324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.195348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.199645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.199892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.199911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.204288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.204552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.204587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.208843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.209118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.209143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.213396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.213694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.213719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:20:13.238 [2024-07-15 15:43:08.218140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.218395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.218415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.222617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.222926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.222947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.227528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.227798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.227822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.232103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.232365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.232390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.236844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.237131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.237151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.241365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.241640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.241664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.245886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.246152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.246176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.250649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.251006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.251033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.255362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.255621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.255645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.259992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.260240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.260263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.264566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.264815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.264838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.269074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.269323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.269347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.273623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.273895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.273934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.278306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.278581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.278636] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.283000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.283314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.283368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.287735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.238 [2024-07-15 15:43:08.287996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.238 [2024-07-15 15:43:08.288020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.238 [2024-07-15 15:43:08.292525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.292786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.292809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.297200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.297455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.297479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.301819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.302102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.302125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.306534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.306842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.306869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.311315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.311594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.311627] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.316118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.316379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.316402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.320793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.321028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.321082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.325583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.325873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.325897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.330312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.330577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.330610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.335034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.335370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.335394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.339738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.340009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.340034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.344406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.344678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:13.239 [2024-07-15 15:43:08.344701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.348940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.349189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.349213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.353404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.353701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.353725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.358158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.358393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.358431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.239 [2024-07-15 15:43:08.362972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.239 [2024-07-15 15:43:08.363296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.239 [2024-07-15 15:43:08.363320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.368256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.368518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.368549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.373232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.373498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.373547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.377993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.378241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.378265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.382370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.382644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.382667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.387234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.387469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.387493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.391827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.392088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.392112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.396423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.396721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.396745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.401150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.401428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.401452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.405717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.405966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.405990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.410346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.410647] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.410687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.414921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.415216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.415239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.419578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.419862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.419885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.424303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.424557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.424593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.428743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.428991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.429014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.433329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.433617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.433641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.437937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.438192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.438216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.442572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.442889] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.442930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.447395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.447652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.447676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.451941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.452188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.452227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.456483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.456773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.456797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.461123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.461370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.461393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.465757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.466023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.466046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.470352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.501 [2024-07-15 15:43:08.470598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.470621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.474913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 
00:20:13.501 [2024-07-15 15:43:08.475257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.501 [2024-07-15 15:43:08.475280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.501 [2024-07-15 15:43:08.479713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.479978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.480018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.484289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.484566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.484597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.488877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.489172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.489192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.493431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.493730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.493755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.497994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.498242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.498266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.502484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.502756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.502780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.507162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.507436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.507460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.511853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.512101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.512125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.516437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.516708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.516731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.520998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.521245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.521269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.525613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.525885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.525909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.530212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.530447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.530502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.534930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.535253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.535277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.539492] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.539762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.539785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.544249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.544497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.544547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.548782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.549029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.549055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.553415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.553711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.553732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.557972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.558240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.558259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.562603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.562920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.562945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.567298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.567557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.567589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:20:13.502 [2024-07-15 15:43:08.571930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.572178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.572201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.576453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.576709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.576732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.581007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.581255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.581278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.585503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.585810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.585834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.590068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.590326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.590350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.594524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.594840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.594864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.599243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.599501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.599533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.603790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.604070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.604093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.608371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.608648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.502 [2024-07-15 15:43:08.608672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.502 [2024-07-15 15:43:08.612915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.502 [2024-07-15 15:43:08.613163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.503 [2024-07-15 15:43:08.613187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 15:43:08.617601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.503 [2024-07-15 15:43:08.617871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.503 [2024-07-15 15:43:08.617896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 15:43:08.622095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.503 [2024-07-15 15:43:08.622342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.503 [2024-07-15 15:43:08.622366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.503 [2024-07-15 15:43:08.626874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.503 [2024-07-15 15:43:08.627261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.503 [2024-07-15 15:43:08.627285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.632188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.632448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.632472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.637257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.637527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.637576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.641898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.642148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.642203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.646452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.646720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.646743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.651193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.651444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.651467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.655839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.656099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.656123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.660291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.660549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.660572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.664925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.665174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.665198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.669508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.669760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.669783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.673994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.674255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.674279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.678610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.678913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.678939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.683144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.683409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.683433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.687844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.688078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.688102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.692365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.692624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.692648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.697007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.697268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 
[2024-07-15 15:43:08.697287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.701498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.701836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.701857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.706045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.706294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.706312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.710686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.711005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.711025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.715480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.715739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.715762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.720094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.720328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.720351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.724643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.724904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.724928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.729178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.764 [2024-07-15 15:43:08.729425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.764 [2024-07-15 15:43:08.729448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.764 [2024-07-15 15:43:08.733727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.733974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.733997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.738331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.738643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.738668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.743034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.743320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.743344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.747668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.747938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.747961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.752272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.752519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.752551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.756843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.757091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.757140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.761400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.761692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.761716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.766072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.766334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.766357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.770661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.770976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.771001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.775365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.775631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.775687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.780067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.780302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.780326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.784656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.784903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.784927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.789231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.789491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.789515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.793790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.794038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.794062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.798445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.798711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.798736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.803001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.803291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.803314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.807756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.808035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.808058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.812462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.812709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.812764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.817064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.817340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.817364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.821690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.821972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.821995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.826234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 
[2024-07-15 15:43:08.826495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.826528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.830931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.831256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.831312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.835780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.836061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.836085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.840352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.765 [2024-07-15 15:43:08.840625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.765 [2024-07-15 15:43:08.840648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.765 [2024-07-15 15:43:08.844977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.766 [2024-07-15 15:43:08.845225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.766 [2024-07-15 15:43:08.845249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.766 [2024-07-15 15:43:08.849415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.766 [2024-07-15 15:43:08.849675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.766 [2024-07-15 15:43:08.849699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.766 [2024-07-15 15:43:08.854023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.766 [2024-07-15 15:43:08.854269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.766 [2024-07-15 15:43:08.854293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.766 [2024-07-15 15:43:08.858497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.766 [2024-07-15 15:43:08.858758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.766 [2024-07-15 15:43:08.858785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.766 [2024-07-15 15:43:08.863251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.766 [2024-07-15 15:43:08.863498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.766 [2024-07-15 15:43:08.863517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.766 [2024-07-15 15:43:08.867817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.766 [2024-07-15 15:43:08.868099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.766 [2024-07-15 15:43:08.868135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.766 [2024-07-15 15:43:08.872549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.766 [2024-07-15 15:43:08.872822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.766 [2024-07-15 15:43:08.872846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.766 [2024-07-15 15:43:08.877167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.766 [2024-07-15 15:43:08.877431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.766 [2024-07-15 15:43:08.877455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.766 [2024-07-15 15:43:08.881787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.766 [2024-07-15 15:43:08.882084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.766 [2024-07-15 15:43:08.882108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.766 [2024-07-15 15:43:08.886572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.766 [2024-07-15 15:43:08.886865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.766 [2024-07-15 15:43:08.886889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.766 [2024-07-15 15:43:08.891633] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:13.766 [2024-07-15 15:43:08.891963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.766 [2024-07-15 15:43:08.891987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.896770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.897019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.897042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.901703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.901989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.902013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.906308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.906577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.906600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.910764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.911113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.911154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.915579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.915888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.915911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.920236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.920495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.920519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:20:14.027 [2024-07-15 15:43:08.924830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.925076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.925099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.929467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.929743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.929767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.934059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.934298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.934337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.938630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.938929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.938953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.943295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.943560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.943593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.947862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.948128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.948152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.952589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.952843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.952862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.957081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.957372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.957391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.961912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.962176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.962202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.966373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.966645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.966669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.971011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.971296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.971320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.975646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.975907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.975930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.980208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.980456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.980479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.027 [2024-07-15 15:43:08.984894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.027 [2024-07-15 15:43:08.985190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.027 [2024-07-15 15:43:08.985215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:08.989437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:08.989696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:08.989719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:08.994156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:08.994416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:08.994441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:08.998677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:08.998956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:08.998981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.003410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.003668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.003691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.008043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.008304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.008327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.012698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.012979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.013003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.017237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.017506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.017553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.021766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.022046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.022070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.026438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.026705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.026729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.031030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.031329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.031353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.035688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.035948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.035972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.040281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.040583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.040617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.044925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.045173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.045198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.049395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.049652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 
[2024-07-15 15:43:09.049671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.053888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.054166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.054185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.058662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.058975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.059001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.063758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.064066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.064090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.068810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.069064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.069089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.073441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.073706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.073730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.078090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.078359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.078383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.082684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.083039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.083065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.087538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.087819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.087843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.092497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.092813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.092838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.097447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.097747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.097772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.028 [2024-07-15 15:43:09.102572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.028 [2024-07-15 15:43:09.102927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.028 [2024-07-15 15:43:09.103108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.029 [2024-07-15 15:43:09.108220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.029 [2024-07-15 15:43:09.108520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.029 [2024-07-15 15:43:09.108571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.029 [2024-07-15 15:43:09.113343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.029 [2024-07-15 15:43:09.113640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.029 [2024-07-15 15:43:09.113674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.029 [2024-07-15 15:43:09.118399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.029 [2024-07-15 15:43:09.118697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.029 [2024-07-15 15:43:09.118722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.029 [2024-07-15 15:43:09.123352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.029 [2024-07-15 15:43:09.123648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.029 [2024-07-15 15:43:09.123681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.029 [2024-07-15 15:43:09.128328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.029 [2024-07-15 15:43:09.128616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.029 [2024-07-15 15:43:09.128650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.029 [2024-07-15 15:43:09.133331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.029 [2024-07-15 15:43:09.133629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.029 [2024-07-15 15:43:09.133654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.029 [2024-07-15 15:43:09.138199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.029 [2024-07-15 15:43:09.138454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.029 [2024-07-15 15:43:09.138478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.029 [2024-07-15 15:43:09.143465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.029 [2024-07-15 15:43:09.143808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.029 [2024-07-15 15:43:09.143833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.029 [2024-07-15 15:43:09.148778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.029 [2024-07-15 15:43:09.149102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.029 [2024-07-15 15:43:09.149143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.029 [2024-07-15 15:43:09.154357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.029 [2024-07-15 15:43:09.154734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.029 [2024-07-15 15:43:09.154761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.290 [2024-07-15 15:43:09.159982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.290 [2024-07-15 15:43:09.160224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.290 [2024-07-15 15:43:09.160279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.290 [2024-07-15 15:43:09.165266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.290 [2024-07-15 15:43:09.165520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.290 [2024-07-15 15:43:09.165568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.290 [2024-07-15 15:43:09.170188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.290 [2024-07-15 15:43:09.170457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.290 [2024-07-15 15:43:09.170482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.290 [2024-07-15 15:43:09.175051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.290 [2024-07-15 15:43:09.175358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.290 [2024-07-15 15:43:09.175382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.290 [2024-07-15 15:43:09.180038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.180292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.180347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.184790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.185046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.185070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.189514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 
[2024-07-15 15:43:09.189795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.189831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.194275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.194529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.194610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.199106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.199419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.199444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.203905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.204159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.204178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.208459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.208755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.208776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.213181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.213436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.213460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.218089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.218340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.218364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.222864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.223149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.223190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.227650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.227904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.227928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.232299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.232595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.232620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.237222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.237477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.237500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.241897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.242165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.242189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.246614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.246915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.246939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.251284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.251537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.251571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.256190] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.256446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.256470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.260836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.261102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.261126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.265547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.265849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.265874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.270241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.270490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.270515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.275155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.275441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.275477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.279852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.280107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.280131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.284373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.284649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.284673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
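The repeated records above are the expected output of this digest test: tcp.c rejects each PDU whose data digest (DDGST) does not match a CRC32C recomputed over the received data, and every affected WRITE is then completed back to the host as a generic-status Transient Transport Error (00/22) with dnr:0, i.e. retryable. As a minimal, self-contained sketch of the checksum involved (not taken from the SPDK sources; the 0xFFFFFFFF seed and final XOR follow the usual CRC32C convention and are assumptions here):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <inttypes.h>

/* Bitwise, reflected CRC32C (Castagnoli); reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;               /* assumed seed convention */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            if (crc & 1u)
                crc = (crc >> 1) ^ 0x82F63B78u;
            else
                crc >>= 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;                 /* assumed final XOR */
}

int main(void)
{
    /* Standard CRC32C check input; the expected digest is 0xE3069283. */
    const uint8_t payload[] = "123456789";
    printf("ddgst=0x%08" PRIX32 "\n", crc32c(payload, sizeof(payload) - 1));
    return 0;
}

A receiver that computes a value different from the DDGST carried in the PDU reports exactly the kind of mismatch logged above.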
00:20:14.291 [2024-07-15 15:43:09.289013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.289281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.289306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.293919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.294159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.294183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.298473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.298774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.298839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.303239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.303506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.303553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.307981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.308236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.308260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.312872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.291 [2024-07-15 15:43:09.313157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.291 [2024-07-15 15:43:09.313181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.291 [2024-07-15 15:43:09.317700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.317968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.317992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.322316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.322610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.322634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.327429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.327728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.327751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.332133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.332388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.332412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.336875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.337145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.337168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.341822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.342088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.342127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.346705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.347014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.347041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.351596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.351907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.351946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.356709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.357035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.357059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.361520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.361814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.361838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.366372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.366642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.366666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.370976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.371323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.371351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.375662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.375911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.375934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.380396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.380662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.380688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.385003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.385251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.385275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.389481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.389761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.389786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.394005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.394252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.394275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.398522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.398782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.398844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.403060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.403360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.403384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.407802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.408069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.408094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.412407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.412664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 [2024-07-15 15:43:09.412683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.292 [2024-07-15 15:43:09.417320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.292 [2024-07-15 15:43:09.417680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.292 
[2024-07-15 15:43:09.417702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.422332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.422578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.422613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.427353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.427601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.427620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.431838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.432086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.432137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.436372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.436649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.436674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.440982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.441241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.441265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.445418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.445708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.445732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.450036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.450296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.450320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.454482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.454741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.454760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.459007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.459340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.459360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.463488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.463756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.463780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.468029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.468263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.468287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.472606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.472866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.472889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.477125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.477373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.477397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.481724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.481971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.481995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.486153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.486401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.486425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.490717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.491037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.553 [2024-07-15 15:43:09.491063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.553 [2024-07-15 15:43:09.495346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.553 [2024-07-15 15:43:09.495594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.495613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.499780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.500073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.500092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.504307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.504597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.504621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.508890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.509202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.509227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.513453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.513713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.513736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.517992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.518240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.518264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.522415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.522693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.522717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.527072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.527375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.527399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.531704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.531958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.531998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.536257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.536517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.536581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.540895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.541141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.541164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.545364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 
[2024-07-15 15:43:09.545641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.545665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.550033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.550267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.550305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.554516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.554772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.554815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.559222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.559454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.559479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.563749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.564009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.564032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.568241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.568489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.568512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.572851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.573097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.573151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.577285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.577548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.577599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.581873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.582153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.582178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.586457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.586716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.586740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.591022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.591294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.591332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.595615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.595875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.595898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.600223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.600483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.600507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.604724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.604973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.604996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.609216] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.609477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.609500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.613871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.614148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.614171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.618423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.618704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.618727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.623067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.623362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.554 [2024-07-15 15:43:09.623386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.554 [2024-07-15 15:43:09.627594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.554 [2024-07-15 15:43:09.627843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.555 [2024-07-15 15:43:09.627866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.555 [2024-07-15 15:43:09.632124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.555 [2024-07-15 15:43:09.632372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.555 [2024-07-15 15:43:09.632396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.555 [2024-07-15 15:43:09.636592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.555 [2024-07-15 15:43:09.636850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.555 [2024-07-15 15:43:09.636873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:20:14.555 [2024-07-15 15:43:09.641030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.555 [2024-07-15 15:43:09.641290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.555 [2024-07-15 15:43:09.641314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.555 [2024-07-15 15:43:09.645621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.555 [2024-07-15 15:43:09.645874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.555 [2024-07-15 15:43:09.645897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.555 [2024-07-15 15:43:09.650126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.555 [2024-07-15 15:43:09.650372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.555 [2024-07-15 15:43:09.650396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.555 [2024-07-15 15:43:09.654638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.555 [2024-07-15 15:43:09.654929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.555 [2024-07-15 15:43:09.654953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.555 [2024-07-15 15:43:09.659169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.555 [2024-07-15 15:43:09.659444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.555 [2024-07-15 15:43:09.659468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.555 [2024-07-15 15:43:09.663837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.555 [2024-07-15 15:43:09.664101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.555 [2024-07-15 15:43:09.664124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.555 [2024-07-15 15:43:09.668324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.555 [2024-07-15 15:43:09.668593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.555 [2024-07-15 15:43:09.668617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.555 [2024-07-15 15:43:09.672863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.555 [2024-07-15 15:43:09.673129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.555 [2024-07-15 15:43:09.673152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.555 [2024-07-15 15:43:09.677402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.555 [2024-07-15 15:43:09.677706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.555 [2024-07-15 15:43:09.677747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.814 [2024-07-15 15:43:09.682711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.814 [2024-07-15 15:43:09.683079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.814 [2024-07-15 15:43:09.683106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.814 [2024-07-15 15:43:09.687533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2263bc0) with pdu=0x2000190fef90 00:20:14.814 [2024-07-15 15:43:09.687854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.814 [2024-07-15 15:43:09.687895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.814 00:20:14.814 Latency(us) 00:20:14.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.814 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:14.814 nvme0n1 : 2.00 6592.57 824.07 0.00 0.00 2421.88 1951.19 5928.03 00:20:14.814 =================================================================================================================== 00:20:14.814 Total : 6592.57 824.07 0.00 0.00 2421.88 1951.19 5928.03 00:20:14.814 0 00:20:14.814 15:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:14.814 15:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:14.814 15:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:14.814 15:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:14.814 | .driver_specific 00:20:14.814 | .nvme_error 00:20:14.814 | .status_code 00:20:14.814 | .command_transient_transport_error' 00:20:15.073 15:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 425 > 0 )) 00:20:15.073 15:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93277 00:20:15.073 15:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@948 -- # '[' -z 93277 ']' 00:20:15.073 15:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93277 00:20:15.073 15:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:15.073 15:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.073 15:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93277 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:15.073 killing process with pid 93277 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93277' 00:20:15.073 Received shutdown signal, test time was about 2.000000 seconds 00:20:15.073 00:20:15.073 Latency(us) 00:20:15.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.073 =================================================================================================================== 00:20:15.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93277 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93277 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93007 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93007 ']' 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93007 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93007 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:15.073 killing process with pid 93007 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93007' 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93007 00:20:15.073 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93007 00:20:15.332 00:20:15.332 real 0m15.500s 00:20:15.332 user 0m29.684s 00:20:15.332 sys 0m4.165s 00:20:15.332 ************************************ 00:20:15.332 END TEST nvmf_digest_error 00:20:15.332 ************************************ 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:15.332 15:43:10 
nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.332 rmmod nvme_tcp 00:20:15.332 rmmod nvme_fabrics 00:20:15.332 rmmod nvme_keyring 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93007 ']' 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93007 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93007 ']' 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93007 00:20:15.332 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93007) - No such process 00:20:15.332 Process with pid 93007 is not found 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93007 is not found' 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.332 15:43:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.592 15:43:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:15.592 00:20:15.592 real 0m33.205s 00:20:15.592 user 1m2.060s 00:20:15.592 sys 0m8.687s 00:20:15.592 15:43:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:15.592 15:43:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:15.592 ************************************ 00:20:15.592 END TEST nvmf_digest 00:20:15.592 ************************************ 00:20:15.592 15:43:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:15.592 15:43:10 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:20:15.592 15:43:10 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:20:15.592 15:43:10 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:15.592 15:43:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:15.592 15:43:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.592 15:43:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.592 ************************************ 
00:20:15.592 START TEST nvmf_mdns_discovery 00:20:15.592 ************************************ 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:15.592 * Looking for test storage... 00:20:15.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:20:15.592 
15:43:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:15.592 Cannot find device "nvmf_tgt_br" 00:20:15.592 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:20:15.593 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.593 Cannot find device "nvmf_tgt_br2" 00:20:15.593 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:20:15.593 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:20:15.593 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:15.593 Cannot find device "nvmf_tgt_br" 00:20:15.593 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:20:15.593 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:15.593 Cannot find device "nvmf_tgt_br2" 00:20:15.593 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:20:15.593 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:15.852 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.853 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.853 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:15.853 15:43:10 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:15.853 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:16.113 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:16.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:20:16.113 00:20:16.113 --- 10.0.0.2 ping statistics --- 00:20:16.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.113 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:20:16.113 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:16.113 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:16.113 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:20:16.113 00:20:16.113 --- 10.0.0.3 ping statistics --- 00:20:16.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.113 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:16.113 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:16.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:16.113 00:20:16.113 --- 10.0.0.1 ping statistics --- 00:20:16.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.113 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:16.113 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.113 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:20:16.113 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:16.113 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.113 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:16.113 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:16.113 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.113 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:16.113 15:43:10 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:16.113 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:16.113 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.113 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.113 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.113 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=93550 00:20:16.113 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:16.113 
15:43:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 93550 00:20:16.113 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 93550 ']' 00:20:16.113 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.113 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.113 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.113 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.113 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.113 [2024-07-15 15:43:11.080442] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:20:16.113 [2024-07-15 15:43:11.080543] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.113 [2024-07-15 15:43:11.222606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.372 [2024-07-15 15:43:11.291851] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.372 [2024-07-15 15:43:11.291911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.372 [2024-07-15 15:43:11.291925] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.372 [2024-07-15 15:43:11.291935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.372 [2024-07-15 15:43:11.291943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
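The nvmf_veth_init trace above reduces to a small virtual topology: one network namespace (nvmf_tgt_ns_spdk) holding the two target-side veth ends, an initiator-side veth pair left in the root namespace, and a bridge tying the peer ends together. The following is a minimal standalone sketch of that setup, replaying only the commands visible in the trace; the interface names, namespace name, and 10.0.0.0/24 addresses are the ones this particular run happens to use, not fixed requirements.

  # Namespace and veth pairs (target-side ends are moved into the namespace).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target interface
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addressing: initiator 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # Bring everything up and bridge the root-namespace peer ends together.
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Allow NVMe/TCP (port 4420) in, and let the bridge forward between its own ports.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) are only sanity checks that this bridge passes traffic both ways before nvmf_tgt is started inside nvmf_tgt_ns_spdk.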
00:20:16.372 [2024-07-15 15:43:11.291979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.372 [2024-07-15 15:43:11.440062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.372 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.373 [2024-07-15 15:43:11.448166] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.373 null0 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
00:20:16.373 null1 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.373 null2 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.373 null3 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=93587 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 93587 /tmp/host.sock 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 93587 ']' 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.373 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.373 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.632 [2024-07-15 15:43:11.550067] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:20:16.632 [2024-07-15 15:43:11.550158] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93587 ] 00:20:16.632 [2024-07-15 15:43:11.689933] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.632 [2024-07-15 15:43:11.758316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.891 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.891 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:16.891 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:20:16.891 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:20:16.891 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:20:16.891 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=93602 00:20:16.891 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:20:16.891 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:20:16.891 15:43:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:20:16.891 Process 974 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:20:16.891 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:20:16.891 Successfully dropped root privileges. 00:20:16.891 avahi-daemon 0.8 starting up. 00:20:16.891 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:20:16.891 Successfully called chroot(). 00:20:16.891 Successfully dropped remaining capabilities. 00:20:16.891 No service file found in /etc/avahi/services. 00:20:17.827 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:17.827 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:20:17.827 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:17.827 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:20:17.827 Network interface enumeration completed. 00:20:17.827 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:20:17.827 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:20:17.827 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:20:17.827 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:20:17.827 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 3696730456. 
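The avahi-daemon whose startup messages appear above does not read the system configuration; mdns_discovery.sh pipes its configuration in on a file descriptor (the echo into /dev/fd/63 in the trace) and runs the daemon inside the target namespace. Written out as an ordinary config file, the options visible in that echo amount to the short sketch below; nothing beyond what the trace already shows is assumed.

  # avahi-daemon configuration as echoed by mdns_discovery.sh in this run:
  # mDNS is restricted to the two target-side interfaces and to IPv4.
  [server]
  allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
  use-ipv4=yes
  use-ipv6=no

Limiting avahi to nvmf_tgt_if and nvmf_tgt_if2 is why it joins the mDNS multicast groups only for 10.0.0.2 and 10.0.0.3, which is consistent with the spdk0/spdk1 _nvme-disc._tcp service records later in this log resolving to exactly those two addresses.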
00:20:17.827 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:17.827 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.827 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:18.085 15:43:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.085 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:18.343 [2024-07-15 15:43:13.272971] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.343 [2024-07-15 15:43:13.324600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.343 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.344 [2024-07-15 15:43:13.364525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.344 
15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.344 [2024-07-15 15:43:13.372520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.344 15:43:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:20:19.322 [2024-07-15 15:43:14.172972] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:19.899 [2024-07-15 15:43:14.772988] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:19.899 [2024-07-15 15:43:14.773013] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:19.899 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:19.899 cookie is 0 00:20:19.899 is_local: 1 00:20:19.899 our_own: 0 00:20:19.899 wide_area: 0 00:20:19.899 multicast: 1 00:20:19.899 cached: 1 00:20:19.899 [2024-07-15 15:43:14.872982] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:19.899 [2024-07-15 15:43:14.873003] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:19.899 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:19.899 cookie is 0 00:20:19.899 is_local: 1 00:20:19.899 our_own: 0 00:20:19.899 wide_area: 0 00:20:19.899 multicast: 1 00:20:19.899 cached: 1 00:20:19.899 [2024-07-15 15:43:14.873029] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:19.899 [2024-07-15 15:43:14.972983] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:19.899 [2024-07-15 15:43:14.973004] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:19.899 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:19.899 cookie is 0 00:20:19.899 is_local: 1 00:20:19.899 our_own: 0 00:20:19.899 wide_area: 0 00:20:19.899 multicast: 1 00:20:19.899 cached: 1 00:20:20.158 [2024-07-15 15:43:15.072981] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:20.158 [2024-07-15 15:43:15.073001] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:20.158 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:20.158 cookie is 0 00:20:20.158 is_local: 1 00:20:20.158 our_own: 0 00:20:20.158 wide_area: 0 00:20:20.158 multicast: 1 00:20:20.158 cached: 1 00:20:20.158 [2024-07-15 15:43:15.073025] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:20.723 [2024-07-15 15:43:15.786430] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:20.723 [2024-07-15 15:43:15.786454] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:20.723 [2024-07-15 15:43:15.786485] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:20.981 [2024-07-15 15:43:15.872535] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:20:20.981 [2024-07-15 15:43:15.929145] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:20.981 [2024-07-15 15:43:15.929171] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:20.981 [2024-07-15 15:43:15.976239] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:20.981 [2024-07-15 15:43:15.976259] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:20.981 [2024-07-15 15:43:15.976289] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:20.981 [2024-07-15 15:43:16.062329] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:20:21.239 [2024-07-15 15:43:16.117797] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:21.239 [2024-07-15 15:43:16.117823] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:23.767 15:43:18 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:20:23.767 
15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.767 15:43:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:20:24.703 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:20:24.703 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:24.703 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:24.703 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.703 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:24.703 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.703 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.962 [2024-07-15 15:43:19.895135] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:24.962 [2024-07-15 15:43:19.895386] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:24.962 [2024-07-15 15:43:19.895410] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:24.962 [2024-07-15 15:43:19.895473] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:24.962 [2024-07-15 15:43:19.895485] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.962 [2024-07-15 15:43:19.903039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:24.962 [2024-07-15 15:43:19.903395] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:24.962 [2024-07-15 15:43:19.903438] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.962 15:43:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:20:24.962 [2024-07-15 15:43:20.034522] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:20:24.962 [2024-07-15 15:43:20.034739] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:20:25.221 [2024-07-15 15:43:20.092833] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:25.221 [2024-07-15 15:43:20.092876] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:25.221 [2024-07-15 15:43:20.092899] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:25.221 [2024-07-15 15:43:20.092916] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:25.221 [2024-07-15 15:43:20.101021] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:25.221 [2024-07-15 15:43:20.101044] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:25.221 [2024-07-15 15:43:20.101066] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:25.221 [2024-07-15 15:43:20.101081] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:25.221 [2024-07-15 15:43:20.138681] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:25.221 [2024-07-15 15:43:20.138700] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:25.221 [2024-07-15 15:43:20.146693] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:25.221 [2024-07-15 15:43:20.146710] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:25.787 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:20:25.787 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:25.787 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.787 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:25.787 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.787 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:25.787 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:26.046 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.046 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:26.046 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:20:26.046 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:26.046 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.046 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.046 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:26.046 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:26.046 15:43:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:26.046 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:26.047 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.308 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:20:26.308 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:26.308 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:20:26.308 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:26.308 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.308 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.308 [2024-07-15 15:43:21.220299] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:26.308 [2024-07-15 15:43:21.220342] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:26.308 [2024-07-15 15:43:21.220373] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:26.308 [2024-07-15 15:43:21.220385] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:26.308 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.308 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:26.308 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.308 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.308 [2024-07-15 15:43:21.227506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.308 [2024-07-15 15:43:21.227578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.308 [2024-07-15 15:43:21.227591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.308 [2024-07-15 15:43:21.227599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.308 [2024-07-15 15:43:21.227608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.308 [2024-07-15 15:43:21.227616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.308 [2024-07-15 15:43:21.227625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.308 [2024-07-15 15:43:21.227632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.308 [2024-07-15 15:43:21.227640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.308 [2024-07-15 15:43:21.228318] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:26.308 [2024-07-15 15:43:21.228376] 
bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:26.308 [2024-07-15 15:43:21.229992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.308 [2024-07-15 15:43:21.230017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.308 [2024-07-15 15:43:21.230027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.308 [2024-07-15 15:43:21.230034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.308 [2024-07-15 15:43:21.230043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.308 [2024-07-15 15:43:21.230050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.308 [2024-07-15 15:43:21.230059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.308 [2024-07-15 15:43:21.230066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.308 [2024-07-15 15:43:21.230073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b230 is same with the state(5) to be set 00:20:26.308 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.308 15:43:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:20:26.308 [2024-07-15 15:43:21.237459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.308 [2024-07-15 15:43:21.239963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b230 (9): Bad file descriptor 00:20:26.308 [2024-07-15 15:43:21.247481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.308 [2024-07-15 15:43:21.247598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.308 [2024-07-15 15:43:21.247618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd72350 with addr=10.0.0.2, port=4420 00:20:26.308 [2024-07-15 15:43:21.247628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.308 [2024-07-15 15:43:21.247642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.308 [2024-07-15 15:43:21.247655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.308 [2024-07-15 15:43:21.247663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:26.308 [2024-07-15 15:43:21.247672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:26.308 [2024-07-15 15:43:21.247686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
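The alternating resetting-controller and connect() errno = 111 entries that follow are expected at this point in the run: mdns_discovery.sh@160 and @161 have just removed the 4420 listeners from both subsystems, so every reconnect attempt to 10.0.0.2:4420 and 10.0.0.3:4420 is refused until the discovery pollers on the 8009 listeners fetch a log page without those trids and drop the stale paths. On the target side the step is just the two RPCs already visible in the xtrace; the nvmf_subsystem_get_listeners calls in the sketch below are only an illustrative way to confirm what remains and are not part of this test script:

    # As run by the test (see @160/@161 above):
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
    # Illustrative only (not in the script): 8009 and 4421 should be the listeners left behind.
    rpc_cmd nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode20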
00:20:26.308 [2024-07-15 15:43:21.249973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:26.308 [2024-07-15 15:43:21.250061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.308 [2024-07-15 15:43:21.250095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2b230 with addr=10.0.0.3, port=4420 00:20:26.308 [2024-07-15 15:43:21.250104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b230 is same with the state(5) to be set 00:20:26.308 [2024-07-15 15:43:21.250118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b230 (9): Bad file descriptor 00:20:26.308 [2024-07-15 15:43:21.250129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:26.308 [2024-07-15 15:43:21.250137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:26.308 [2024-07-15 15:43:21.250144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:26.308 [2024-07-15 15:43:21.250156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.308 [2024-07-15 15:43:21.257572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.308 [2024-07-15 15:43:21.257666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.308 [2024-07-15 15:43:21.257686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd72350 with addr=10.0.0.2, port=4420 00:20:26.308 [2024-07-15 15:43:21.257695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.308 [2024-07-15 15:43:21.257709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.308 [2024-07-15 15:43:21.257722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.308 [2024-07-15 15:43:21.257730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:26.308 [2024-07-15 15:43:21.257738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:26.308 [2024-07-15 15:43:21.257751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:26.308 [2024-07-15 15:43:21.260032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:26.308 [2024-07-15 15:43:21.260100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.308 [2024-07-15 15:43:21.260118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2b230 with addr=10.0.0.3, port=4420 00:20:26.308 [2024-07-15 15:43:21.260127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b230 is same with the state(5) to be set 00:20:26.308 [2024-07-15 15:43:21.260140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b230 (9): Bad file descriptor 00:20:26.308 [2024-07-15 15:43:21.260153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:26.308 [2024-07-15 15:43:21.260160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:26.308 [2024-07-15 15:43:21.260168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:26.308 [2024-07-15 15:43:21.260179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.308 [2024-07-15 15:43:21.267639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.308 [2024-07-15 15:43:21.267726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.308 [2024-07-15 15:43:21.267745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd72350 with addr=10.0.0.2, port=4420 00:20:26.308 [2024-07-15 15:43:21.267755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.308 [2024-07-15 15:43:21.267779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.308 [2024-07-15 15:43:21.267794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.308 [2024-07-15 15:43:21.267803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:26.308 [2024-07-15 15:43:21.267811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:26.308 [2024-07-15 15:43:21.267825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:26.309 [2024-07-15 15:43:21.270074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:26.309 [2024-07-15 15:43:21.270141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.309 [2024-07-15 15:43:21.270159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2b230 with addr=10.0.0.3, port=4420 00:20:26.309 [2024-07-15 15:43:21.270168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b230 is same with the state(5) to be set 00:20:26.309 [2024-07-15 15:43:21.270182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b230 (9): Bad file descriptor 00:20:26.309 [2024-07-15 15:43:21.270194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:26.309 [2024-07-15 15:43:21.270202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:26.309 [2024-07-15 15:43:21.270209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:26.309 [2024-07-15 15:43:21.270221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.309 [2024-07-15 15:43:21.277701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.309 [2024-07-15 15:43:21.277784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.309 [2024-07-15 15:43:21.277804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd72350 with addr=10.0.0.2, port=4420 00:20:26.309 [2024-07-15 15:43:21.277814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.309 [2024-07-15 15:43:21.277829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.309 [2024-07-15 15:43:21.277842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.309 [2024-07-15 15:43:21.277850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:26.309 [2024-07-15 15:43:21.277874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:26.309 [2024-07-15 15:43:21.277902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:26.309 [2024-07-15 15:43:21.280114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:26.309 [2024-07-15 15:43:21.280197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.309 [2024-07-15 15:43:21.280216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2b230 with addr=10.0.0.3, port=4420 00:20:26.309 [2024-07-15 15:43:21.280225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b230 is same with the state(5) to be set 00:20:26.309 [2024-07-15 15:43:21.280238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b230 (9): Bad file descriptor 00:20:26.309 [2024-07-15 15:43:21.280250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:26.309 [2024-07-15 15:43:21.280258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:26.309 [2024-07-15 15:43:21.280265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:26.309 [2024-07-15 15:43:21.280277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.309 [2024-07-15 15:43:21.287751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.309 [2024-07-15 15:43:21.287824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.309 [2024-07-15 15:43:21.287843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd72350 with addr=10.0.0.2, port=4420 00:20:26.309 [2024-07-15 15:43:21.287852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.309 [2024-07-15 15:43:21.287865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.309 [2024-07-15 15:43:21.287892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.309 [2024-07-15 15:43:21.287900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:26.309 [2024-07-15 15:43:21.287907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:26.309 [2024-07-15 15:43:21.287920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:26.309 [2024-07-15 15:43:21.290171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:26.309 [2024-07-15 15:43:21.290252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.309 [2024-07-15 15:43:21.290270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2b230 with addr=10.0.0.3, port=4420 00:20:26.309 [2024-07-15 15:43:21.290278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b230 is same with the state(5) to be set 00:20:26.309 [2024-07-15 15:43:21.290293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b230 (9): Bad file descriptor 00:20:26.309 [2024-07-15 15:43:21.290304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:26.309 [2024-07-15 15:43:21.290312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:26.309 [2024-07-15 15:43:21.290319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:26.309 [2024-07-15 15:43:21.290331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.309 [2024-07-15 15:43:21.297796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.309 [2024-07-15 15:43:21.297893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.309 [2024-07-15 15:43:21.297912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd72350 with addr=10.0.0.2, port=4420 00:20:26.309 [2024-07-15 15:43:21.297920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.309 [2024-07-15 15:43:21.297934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.309 [2024-07-15 15:43:21.297946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.309 [2024-07-15 15:43:21.297954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:26.309 [2024-07-15 15:43:21.297961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:26.309 [2024-07-15 15:43:21.297973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:26.309 [2024-07-15 15:43:21.300224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:26.309 [2024-07-15 15:43:21.300290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.309 [2024-07-15 15:43:21.300309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2b230 with addr=10.0.0.3, port=4420 00:20:26.309 [2024-07-15 15:43:21.300317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b230 is same with the state(5) to be set 00:20:26.309 [2024-07-15 15:43:21.300331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b230 (9): Bad file descriptor 00:20:26.309 [2024-07-15 15:43:21.300343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:26.309 [2024-07-15 15:43:21.300350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:26.309 [2024-07-15 15:43:21.300358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:26.309 [2024-07-15 15:43:21.300370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.309 [2024-07-15 15:43:21.307852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.309 [2024-07-15 15:43:21.307935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.309 [2024-07-15 15:43:21.307953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd72350 with addr=10.0.0.2, port=4420 00:20:26.309 [2024-07-15 15:43:21.307963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.309 [2024-07-15 15:43:21.307976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.309 [2024-07-15 15:43:21.307988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.309 [2024-07-15 15:43:21.307996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:26.309 [2024-07-15 15:43:21.308004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:26.309 [2024-07-15 15:43:21.308017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
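While the reconnect retries above and below keep failing against the removed 4420 listeners, the host application's discovery state can still be inspected with the same RPCs the test used at mdns_discovery.sh@77 and @81; a brief sketch, with the expected names taken from the comparisons at @127/@128 earlier in the run:

    # mDNS browse service registered in the host app:
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'   # expected: mdns
    # Discovery controllers created from the resolved 8009 listeners:
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'        # expected: mdns0_nvme mdns1_nvme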
00:20:26.309 [2024-07-15 15:43:21.310266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:26.309 [2024-07-15 15:43:21.310346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.309 [2024-07-15 15:43:21.310364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2b230 with addr=10.0.0.3, port=4420 00:20:26.309 [2024-07-15 15:43:21.310373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b230 is same with the state(5) to be set 00:20:26.309 [2024-07-15 15:43:21.310387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b230 (9): Bad file descriptor 00:20:26.309 [2024-07-15 15:43:21.310414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:26.309 [2024-07-15 15:43:21.310423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:26.309 [2024-07-15 15:43:21.310431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:26.309 [2024-07-15 15:43:21.310444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.309 [2024-07-15 15:43:21.317911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.309 [2024-07-15 15:43:21.318016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.309 [2024-07-15 15:43:21.318035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd72350 with addr=10.0.0.2, port=4420 00:20:26.309 [2024-07-15 15:43:21.318044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.309 [2024-07-15 15:43:21.318058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.309 [2024-07-15 15:43:21.318071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.309 [2024-07-15 15:43:21.318079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:26.309 [2024-07-15 15:43:21.318086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:26.309 [2024-07-15 15:43:21.318099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:26.309 [2024-07-15 15:43:21.320321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:26.309 [2024-07-15 15:43:21.320410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.309 [2024-07-15 15:43:21.320428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2b230 with addr=10.0.0.3, port=4420 00:20:26.309 [2024-07-15 15:43:21.320437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b230 is same with the state(5) to be set 00:20:26.309 [2024-07-15 15:43:21.320451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b230 (9): Bad file descriptor 00:20:26.309 [2024-07-15 15:43:21.320478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:26.309 [2024-07-15 15:43:21.320487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:26.309 [2024-07-15 15:43:21.320495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:26.310 [2024-07-15 15:43:21.320508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.310 [2024-07-15 15:43:21.327986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.310 [2024-07-15 15:43:21.328070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.310 [2024-07-15 15:43:21.328088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd72350 with addr=10.0.0.2, port=4420 00:20:26.310 [2024-07-15 15:43:21.328097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.310 [2024-07-15 15:43:21.328111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.310 [2024-07-15 15:43:21.328124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.310 [2024-07-15 15:43:21.328131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:26.310 [2024-07-15 15:43:21.328139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:26.310 [2024-07-15 15:43:21.328152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:26.310 [2024-07-15 15:43:21.330392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:26.310 [2024-07-15 15:43:21.330474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.310 [2024-07-15 15:43:21.330492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2b230 with addr=10.0.0.3, port=4420 00:20:26.310 [2024-07-15 15:43:21.330501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b230 is same with the state(5) to be set 00:20:26.310 [2024-07-15 15:43:21.330515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b230 (9): Bad file descriptor 00:20:26.310 [2024-07-15 15:43:21.330553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:26.310 [2024-07-15 15:43:21.330564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:26.310 [2024-07-15 15:43:21.330572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:26.310 [2024-07-15 15:43:21.330584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.310 [2024-07-15 15:43:21.338043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.310 [2024-07-15 15:43:21.338147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.310 [2024-07-15 15:43:21.338166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd72350 with addr=10.0.0.2, port=4420 00:20:26.310 [2024-07-15 15:43:21.338175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.310 [2024-07-15 15:43:21.338189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.310 [2024-07-15 15:43:21.338201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.310 [2024-07-15 15:43:21.338208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:26.310 [2024-07-15 15:43:21.338216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:26.310 [2024-07-15 15:43:21.338228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:26.310 [2024-07-15 15:43:21.340446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:26.310 [2024-07-15 15:43:21.340514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.310 [2024-07-15 15:43:21.340559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2b230 with addr=10.0.0.3, port=4420 00:20:26.310 [2024-07-15 15:43:21.340569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b230 is same with the state(5) to be set 00:20:26.310 [2024-07-15 15:43:21.340583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b230 (9): Bad file descriptor 00:20:26.310 [2024-07-15 15:43:21.340610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:26.310 [2024-07-15 15:43:21.340620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:26.310 [2024-07-15 15:43:21.340628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:26.310 [2024-07-15 15:43:21.340641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.310 [2024-07-15 15:43:21.348114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.310 [2024-07-15 15:43:21.348198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.310 [2024-07-15 15:43:21.348216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd72350 with addr=10.0.0.2, port=4420 00:20:26.310 [2024-07-15 15:43:21.348225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.310 [2024-07-15 15:43:21.348239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.310 [2024-07-15 15:43:21.348250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.310 [2024-07-15 15:43:21.348258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:26.310 [2024-07-15 15:43:21.348266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:26.310 [2024-07-15 15:43:21.348278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:26.310 [2024-07-15 15:43:21.350489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:26.310 [2024-07-15 15:43:21.350564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.310 [2024-07-15 15:43:21.350583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2b230 with addr=10.0.0.3, port=4420 00:20:26.310 [2024-07-15 15:43:21.350592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2b230 is same with the state(5) to be set 00:20:26.310 [2024-07-15 15:43:21.350606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2b230 (9): Bad file descriptor 00:20:26.310 [2024-07-15 15:43:21.350634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:26.310 [2024-07-15 15:43:21.350643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:26.310 [2024-07-15 15:43:21.350651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:26.310 [2024-07-15 15:43:21.350664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.310 [2024-07-15 15:43:21.358171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.310 [2024-07-15 15:43:21.358254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.310 [2024-07-15 15:43:21.358273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd72350 with addr=10.0.0.2, port=4420 00:20:26.310 [2024-07-15 15:43:21.358281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72350 is same with the state(5) to be set 00:20:26.310 [2024-07-15 15:43:21.358295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd72350 (9): Bad file descriptor 00:20:26.310 [2024-07-15 15:43:21.358307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.310 [2024-07-15 15:43:21.358315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:26.310 [2024-07-15 15:43:21.358322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:26.310 [2024-07-15 15:43:21.358334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
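The repeated "connect() failed, errno = 111" messages above are ECONNREFUSED: the target has stopped listening on port 4420, so every reconnect attempt from the host-side bdev_nvme layer is refused until discovery re-attaches the controller on 4421. As a hedged sketch (not the literal lines from mdns_discovery.sh, and assuming the default target RPC socket), the kind of target-side change that produces exactly this churn looks like:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the old listener and announce a new one; hosts that are still attached now get
  # ECONNREFUSED (errno 111) on every reconnect poll until they re-attach on the new port.
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

The "Resetting controller failed" lines are expected while the old path is gone; they stop once the discovery log page points the host at 4421, which is what the "found again" messages below confirm.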
00:20:26.310 [2024-07-15 15:43:21.359658] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:20:26.310 [2024-07-15 15:43:21.359682] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:26.310 [2024-07-15 15:43:21.359701] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:26.310 [2024-07-15 15:43:21.359731] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:26.310 [2024-07-15 15:43:21.359745] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:26.310 [2024-07-15 15:43:21.359757] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:26.569 [2024-07-15 15:43:21.445732] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:26.569 [2024-07-15 15:43:21.445787] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:27.137 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:20:27.137 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:27.137 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.137 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:27.137 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.137 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:27.137 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:27.137 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
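The assertions that follow reduce to three small jq pipelines against the host-side RPC socket: controller names, bdev names, and the trsvcid of each path, each sorted and collapsed to a single line so it can be string-compared. A minimal sketch of those helpers, using only commands that appear in this log (the function names mirror get_subsystem_names, get_bdev_list and get_subsystem_paths from mdns_discovery.sh; the real versions go through the rpc_cmd wrapper and its xtrace plumbing):

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

  get_subsystem_names() {   # -> "mdns0_nvme0 mdns1_nvme0"
      $rpc_py bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  get_bdev_list() {         # -> "mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2"
      $rpc_py bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  get_subsystem_paths() {   # -> active trsvcid(s) for one controller, e.g. "4421"
      $rpc_py bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  [[ "$(get_subsystem_names)" == "mdns0_nvme0 mdns1_nvme0" ]]

At this point both controllers have failed over, which is why get_subsystem_paths reports 4421 for mdns0_nvme0 and mdns1_nvme0 in the checks below.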
00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.396 15:43:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:20:27.655 [2024-07-15 15:43:22.573011] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:28.591 15:43:23 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.591 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.850 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:20:28.850 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:20:28.850 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.851 [2024-07-15 
15:43:23.760429] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:20:28.851 2024/07/15 15:43:23 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:28.851 request: 00:20:28.851 { 00:20:28.851 "method": "bdev_nvme_start_mdns_discovery", 00:20:28.851 "params": { 00:20:28.851 "name": "mdns", 00:20:28.851 "svcname": "_nvme-disc._http", 00:20:28.851 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:28.851 } 00:20:28.851 } 00:20:28.851 Got JSON-RPC error response 00:20:28.851 GoRPCClient: error on JSON-RPC call 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:28.851 15:43:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:20:29.417 [2024-07-15 15:43:24.349055] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:29.417 [2024-07-15 15:43:24.449053] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:29.676 [2024-07-15 15:43:24.549055] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:29.676 [2024-07-15 15:43:24.549075] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:29.676 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:29.676 cookie is 0 00:20:29.676 is_local: 1 00:20:29.676 our_own: 0 00:20:29.676 wide_area: 0 00:20:29.676 multicast: 1 00:20:29.676 cached: 1 00:20:29.676 [2024-07-15 15:43:24.649054] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:29.676 [2024-07-15 15:43:24.649075] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:29.676 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:29.676 cookie is 0 00:20:29.676 is_local: 1 00:20:29.676 our_own: 0 00:20:29.676 wide_area: 0 00:20:29.676 multicast: 1 00:20:29.676 cached: 1 00:20:29.676 [2024-07-15 15:43:24.649101] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:29.676 [2024-07-15 15:43:24.749054] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:29.676 [2024-07-15 15:43:24.749077] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:29.676 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:29.676 cookie is 0 00:20:29.676 is_local: 1 00:20:29.676 our_own: 0 00:20:29.676 wide_area: 0 00:20:29.676 multicast: 1 00:20:29.676 cached: 1 00:20:29.934 [2024-07-15 15:43:24.849054] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:29.934 [2024-07-15 15:43:24.849075] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:29.934 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:29.934 cookie is 0 00:20:29.934 is_local: 1 00:20:29.934 our_own: 0 00:20:29.934 wide_area: 0 00:20:29.934 multicast: 1 00:20:29.934 cached: 1 00:20:29.934 [2024-07-15 15:43:24.849099] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:30.528 [2024-07-15 15:43:25.557865] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:30.528 [2024-07-15 15:43:25.557887] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:30.528 [2024-07-15 15:43:25.557918] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:30.528 [2024-07-15 15:43:25.643974] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:20:30.787 [2024-07-15 15:43:25.703777] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:30.787 [2024-07-15 15:43:25.703803] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:30.787 [2024-07-15 15:43:25.757778] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:30.787 [2024-07-15 15:43:25.757800] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:30.787 [2024-07-15 15:43:25.757830] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:30.787 [2024-07-15 15:43:25.843908] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:20:30.787 [2024-07-15 15:43:25.903478] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:30.787 [2024-07-15 15:43:25.903504] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:34.073 15:43:28 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.073 [2024-07-15 15:43:28.957513] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:20:34.073 2024/07/15 15:43:28 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:34.073 request: 00:20:34.073 { 00:20:34.073 "method": "bdev_nvme_start_mdns_discovery", 00:20:34.073 "params": { 00:20:34.073 "name": "cdc", 00:20:34.073 "svcname": "_nvme-disc._tcp", 00:20:34.073 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:34.073 } 00:20:34.073 } 00:20:34.073 Got JSON-RPC error response 00:20:34.073 GoRPCClient: error on JSON-RPC call 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:34.073 15:43:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 93587 00:20:34.073 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 93587 00:20:34.073 [2024-07-15 15:43:29.149101] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 93602 00:20:34.333 Got SIGTERM, quitting. 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:20:34.333 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:34.333 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:34.333 avahi-daemon 0.8 exiting. 
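Both bdev_nvme_start_mdns_discovery rejections above (Code=-17, "File exists") are deliberate negative tests: a second discovery service may not reuse the name "mdns", and a second poller may not watch _nvme-disc._tcp while one is already running. The only reason those failures do not abort the script is the NOT wrapper from common/autotest_common.sh, which inverts the exit status. A simplified sketch of that pattern (the real helper also validates the argument type and tracks the exit code in es, as the xtrace above shows):

  NOT() {
      if "$@"; then
          return 1    # the wrapped command unexpectedly succeeded
      fi
      return 0        # it failed as expected, e.g. with JSON-RPC Code=-17 (File exists)
  }

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
  # First registration succeeds; the duplicate is expected to be rejected with -17.
  $rpc_py bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  NOT $rpc_py bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test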
00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:34.333 rmmod nvme_tcp 00:20:34.333 rmmod nvme_fabrics 00:20:34.333 rmmod nvme_keyring 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 93550 ']' 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 93550 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 93550 ']' 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 93550 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93550 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:34.333 killing process with pid 93550 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93550' 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 93550 00:20:34.333 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 93550 00:20:34.594 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:34.594 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:34.594 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:34.594 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:34.594 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:34.594 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.594 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.594 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.594 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:34.594 00:20:34.594 real 0m19.000s 00:20:34.594 user 0m37.713s 00:20:34.594 sys 0m1.787s 00:20:34.595 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:34.595 ************************************ 00:20:34.595 END TEST nvmf_mdns_discovery 00:20:34.595 15:43:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.595 ************************************ 00:20:34.595 15:43:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
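The "killing process with pid 93550" sequence above is the usual killprocess teardown helper: check that the pid is alive, make sure it is not a sudo wrapper, then signal and reap it. Roughly, as a simplified sketch (the real helper in common/autotest_common.sh also branches on uname and handles sudo-owned processes differently):

  killprocess() {
      local pid=$1
      [[ -n "$pid" ]] || return 1                     # no pid given
      kill -0 "$pid" || return 1                      # not running any more
      local name
      name=$(ps --no-headers -o comm= "$pid")         # "reactor_1" for the nvmf target
      [[ "$name" != sudo ]] || return 1               # never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                             # reap it if it is a child of this shell
  }

  killprocess 93550    # what the teardown lines above reduce to

With the target gone and the nvme-tcp modules unloaded, the mdns discovery test ends and the multipath test below starts from a clean slate.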
00:20:34.595 15:43:29 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:20:34.595 15:43:29 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:34.595 15:43:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:34.595 15:43:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.595 15:43:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:34.595 ************************************ 00:20:34.595 START TEST nvmf_host_multipath 00:20:34.595 ************************************ 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:34.595 * Looking for test storage... 00:20:34.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:34.595 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:34.855 Cannot 
find device "nvmf_tgt_br" 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:34.855 Cannot find device "nvmf_tgt_br2" 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:34.855 Cannot find device "nvmf_tgt_br" 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:34.855 Cannot find device "nvmf_tgt_br2" 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:34.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:34.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:34.855 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:34.856 15:43:29 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:34.856 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:34.856 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:34.856 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:34.856 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:34.856 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:34.856 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:34.856 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:35.115 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:35.115 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:35.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:20:35.115 00:20:35.115 --- 10.0.0.2 ping statistics --- 00:20:35.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.115 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:20:35.115 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:35.115 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:35.115 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:20:35.115 00:20:35.115 --- 10.0.0.3 ping statistics --- 00:20:35.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.115 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:35.115 15:43:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:35.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:35.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:35.115 00:20:35.115 --- 10.0.0.1 ping statistics --- 00:20:35.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.115 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94158 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94158 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94158 ']' 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.115 15:43:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:35.115 [2024-07-15 15:43:30.087838] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:20:35.115 [2024-07-15 15:43:30.087927] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.115 [2024-07-15 15:43:30.225330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:35.374 [2024-07-15 15:43:30.296693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
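At this point the veth fixture for the multipath test is up and nvmf_tgt (pid 94158) is starting inside the namespace; the remaining NOTICE lines are its normal startup banner. Condensed, the fixture that nvmf_veth_init built above is three veth pairs on one bridge, with both target-side ends moved into the nvmf_tgt_ns_spdk namespace so that 10.0.0.2 and 10.0.0.3 act as two independent paths to the same target. The sketch below uses the same names and addresses as the log and omits the iptables ACCEPT rules the real fixture also installs:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target path 1
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # target path 2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                                   # bridge all three pairs
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  ping -c 1 10.0.0.2                                                        # host -> target path 1
  ping -c 1 10.0.0.3                                                        # host -> target path 2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target -> host

The target itself is then launched inside that namespace with ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3, which is the nvmfpid (94158) that waitforlisten blocks on above.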
00:20:35.374 [2024-07-15 15:43:30.296947] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.374 [2024-07-15 15:43:30.297045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.374 [2024-07-15 15:43:30.297138] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.374 [2024-07-15 15:43:30.297220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.374 [2024-07-15 15:43:30.297505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.374 [2024-07-15 15:43:30.297516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.941 15:43:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.941 15:43:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:35.941 15:43:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:35.941 15:43:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:35.941 15:43:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:36.200 15:43:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.200 15:43:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94158 00:20:36.200 15:43:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:36.459 [2024-07-15 15:43:31.354400] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.459 15:43:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:36.459 Malloc0 00:20:36.717 15:43:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:36.976 15:43:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.234 15:43:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.234 [2024-07-15 15:43:32.342766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.234 15:43:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:37.492 [2024-07-15 15:43:32.546807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:37.492 15:43:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:37.492 15:43:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94262 00:20:37.492 15:43:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.492 15:43:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # 
waitforlisten 94262 /var/tmp/bdevperf.sock 00:20:37.492 15:43:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94262 ']' 00:20:37.492 15:43:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.492 15:43:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.492 15:43:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.492 15:43:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.492 15:43:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:38.059 15:43:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.059 15:43:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:38.059 15:43:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:38.059 15:43:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:38.624 Nvme0n1 00:20:38.624 15:43:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:38.882 Nvme0n1 00:20:38.882 15:43:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:38.882 15:43:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:39.840 15:43:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:39.840 15:43:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:40.098 15:43:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:40.355 15:43:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:40.355 15:43:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94158 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:40.355 15:43:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94336 00:20:40.355 15:43:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:46.944 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:46.944 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:46.944 15:43:41 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:46.944 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:46.944 Attaching 4 probes... 00:20:46.944 @path[10.0.0.2, 4421]: 20047 00:20:46.945 @path[10.0.0.2, 4421]: 20109 00:20:46.945 @path[10.0.0.2, 4421]: 20284 00:20:46.945 @path[10.0.0.2, 4421]: 20313 00:20:46.945 @path[10.0.0.2, 4421]: 19624 00:20:46.945 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:46.945 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:46.945 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:46.945 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:46.945 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:46.945 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:46.945 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94336 00:20:46.945 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:46.945 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:46.945 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:46.945 15:43:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:47.203 15:43:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:47.203 15:43:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94466 00:20:47.203 15:43:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94158 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:47.203 15:43:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:53.911 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:53.911 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:53.911 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:53.911 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:53.911 Attaching 4 probes... 
00:20:53.911 @path[10.0.0.2, 4420]: 20263 00:20:53.911 @path[10.0.0.2, 4420]: 20456 00:20:53.911 @path[10.0.0.2, 4420]: 20347 00:20:53.911 @path[10.0.0.2, 4420]: 20373 00:20:53.911 @path[10.0.0.2, 4420]: 20464 00:20:53.911 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:53.911 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94466 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94597 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94158 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:53.912 15:43:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:00.505 15:43:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:00.505 15:43:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:00.505 Attaching 4 probes... 
00:21:00.505 @path[10.0.0.2, 4421]: 13841 00:21:00.505 @path[10.0.0.2, 4421]: 19866 00:21:00.505 @path[10.0.0.2, 4421]: 20398 00:21:00.505 @path[10.0.0.2, 4421]: 20111 00:21:00.505 @path[10.0.0.2, 4421]: 20001 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94597 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:00.505 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:00.763 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:00.763 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94158 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:00.763 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94727 00:21:00.763 15:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:07.314 Attaching 4 probes... 
00:21:07.314 00:21:07.314 00:21:07.314 00:21:07.314 00:21:07.314 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94727 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:07.314 15:44:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:07.315 15:44:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:07.315 15:44:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:07.315 15:44:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94158 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:07.315 15:44:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94858 00:21:07.315 15:44:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:13.879 Attaching 4 probes... 
00:21:13.879 @path[10.0.0.2, 4421]: 19500 00:21:13.879 @path[10.0.0.2, 4421]: 19794 00:21:13.879 @path[10.0.0.2, 4421]: 19767 00:21:13.879 @path[10.0.0.2, 4421]: 19833 00:21:13.879 @path[10.0.0.2, 4421]: 19829 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94858 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:13.879 [2024-07-15 15:44:08.907464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 00:21:13.879 [2024-07-15 15:44:08.907676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set 
00:21:13.879 [2024-07-15 15:44:08.907684 - 2024-07-15 15:44:08.908005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36310 is same with the state(5) to be set (same message logged once per timestamp in this range) 00:21:13.879 15:44:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:14.815 15:44:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:14.815 15:44:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94988 00:21:14.815 15:44:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94158 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:14.815 15:44:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:21.376 15:44:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:21.376 15:44:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:21.376 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:21.376 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:21.376 Attaching 4 probes... 00:21:21.376 @path[10.0.0.2, 4420]: 19587 00:21:21.376 @path[10.0.0.2, 4420]: 20009 00:21:21.376 @path[10.0.0.2, 4420]: 19823 00:21:21.376 @path[10.0.0.2, 4420]: 19732 00:21:21.376 @path[10.0.0.2, 4420]: 19870 00:21:21.376 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:21.376 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:21.376 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:21.376 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:21.376 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:21.376 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:21.376 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94988 00:21:21.376 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:21.376 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:21.376 [2024-07-15 15:44:16.419822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:21.376 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:21.635 15:44:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:28.200 15:44:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:28.200 15:44:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95186 00:21:28.200 15:44:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94158 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:28.200 15:44:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:34.780 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:34.780 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:34.780 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:34.780 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:34.780 Attaching 4 probes... 
00:21:34.780 @path[10.0.0.2, 4421]: 19036 00:21:34.780 @path[10.0.0.2, 4421]: 19366 00:21:34.780 @path[10.0.0.2, 4421]: 19484 00:21:34.780 @path[10.0.0.2, 4421]: 19422 00:21:34.780 @path[10.0.0.2, 4421]: 19659 00:21:34.780 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:34.780 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:34.780 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:34.780 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95186 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94262 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94262 ']' 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94262 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94262 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:34.781 killing process with pid 94262 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94262' 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94262 00:21:34.781 15:44:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94262 00:21:34.781 Connection closed with partial response: 00:21:34.781 00:21:34.781 00:21:34.781 15:44:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94262 00:21:34.781 15:44:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:34.781 [2024-07-15 15:43:32.605584] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:21:34.781 [2024-07-15 15:43:32.605676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94262 ] 00:21:34.781 [2024-07-15 15:43:32.741801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.781 [2024-07-15 15:43:32.809744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.781 Running I/O for 90 seconds... 
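For readers reconstructing the flow from the trace above: each failover round in this run is driven by two small helpers from test/nvmf/host/multipath.sh, set_ANA_state and confirm_io_on_port, whose individual commands appear verbatim in the xtrace output. The sketch below is a hedged, stand-alone reconstruction of that pattern, not the literal multipath.sh source; it assumes the target from this run is still listening on 10.0.0.2:4420/4421 and that trace.txt already holds the @path[...] counters written by the bpftrace probe (scripts/bpf/nvmf_path.bt) started via scripts/bpftrace.sh.

#!/usr/bin/env bash
# Illustrative reconstruction of the ANA-failover check traced in this log.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Advertise a new ANA state on each listener: $1 applies to port 4420, $2 to port 4421.
set_ANA_state() {
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Confirm that the port advertising ANA state $1 is also the port the I/O actually used ($2).
confirm_io_on_port() {
    local expected_state=$1 expected_port=$2 active_port traced_port
    active_port=$("$rpc" nvmf_subsystem_get_listeners "$nqn" |
        jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
    # trace.txt lines look like "@path[10.0.0.2, 4421]: 20047"; take the port from the first one.
    traced_port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | sed -n 1p | cut -d ']' -f1)
    [[ $active_port == "$expected_port" && $traced_port == "$expected_port" ]]
}

# Example round mirroring the log: make 4420 non_optimized and 4421 optimized,
# give the bpftrace probe a few seconds to count I/O, then expect everything on 4421.
set_ANA_state non_optimized optimized
sleep 6
confirm_io_on_port optimized 4421

The nvme_qpair prints that follow in try.txt are the bdevperf-side half of the same picture: while a listener is advertised as inaccessible, commands issued on that path complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), and the @path counters above show the I/O carrying on over the remaining port.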
00:21:34.781 [2024-07-15 15:43:42.104830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.104938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.105018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.105036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.105055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.105069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.105087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.105099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.105116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.105128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.105146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.105158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.105175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.105187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.105205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.105217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106481] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106922] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.106978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.106993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.107014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.781 [2024-07-15 15:43:42.107028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:34.781 [2024-07-15 15:43:42.107050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 15:43:42.107308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:34.782 [2024-07-15 15:43:42.107340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 15:43:42.107370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 15:43:42.107400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 15:43:42.107430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 15:43:42.107461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 15:43:42.107493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 15:43:42.107523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.107980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.107999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.108012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.108031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.108060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.108079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.108091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.108110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.108122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.108147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.108161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.108180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.108193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.108976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.109003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.109042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.109056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.109075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.109088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.109107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.109120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.109138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.109151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.109170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.109183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.109201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.109214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:34.782 
[2024-07-15 15:43:42.109232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.109246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.109264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.109277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.109296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.109308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.109340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.109355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.109373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.782 [2024-07-15 15:43:42.109386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.109404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.782 [2024-07-15 15:43:42.109422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:34.782 [2024-07-15 15:43:42.109442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 15:43:42.109455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 15:43:42.109485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 15:43:42.109517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 15:43:42.109581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 
cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 15:43:42.109632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.783 [2024-07-15 15:43:42.109668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.109703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.109739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.109779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.109822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.109859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.109895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.109958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.109977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.109990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 
[2024-07-15 15:43:42.110705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.110976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.110998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.783 [2024-07-15 15:43:42.111012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:34.783 [2024-07-15 15:43:42.111033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:42.111047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:42.111068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4744 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:42.111082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:42.111103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:42.111143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:42.111163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:42.111191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:42.111231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:42.111245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:42.111264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:42.111278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:42.111297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:42.111310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:42.111329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:42.111342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:42.111360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:42.111372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:42.111391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:42.111404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.592762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.592827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.592900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.592950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.592971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.592985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.593961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.593974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:34.784 
[2024-07-15 15:43:48.593995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.594008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.594030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.594043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.594077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.594090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.594110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.594123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.594143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.594156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.594176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.594189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.594209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.594222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.594242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.594255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.594274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.594299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:34.784 [2024-07-15 15:43:48.594318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-07-15 15:43:48.594331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 
sqhd:003e p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.594968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.594992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.595017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.595032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.595057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.595071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.595095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.595124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.595156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.595170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.595206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.595219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.595239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.595252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.595273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.595285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.595306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.595319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.595340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.595353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.595374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.595398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.595420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.595433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.595455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.595474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.596112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.596135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.596163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.596178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.596203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.596228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.596251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.596264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.596288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 
[2024-07-15 15:43:48.596301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.596324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.785 [2024-07-15 15:43:48.596337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.596360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.785 [2024-07-15 15:43:48.596373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.596397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.785 [2024-07-15 15:43:48.596410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.596432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.785 [2024-07-15 15:43:48.596446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.596469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.785 [2024-07-15 15:43:48.596482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.785 [2024-07-15 15:43:48.596505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.596518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.596572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.596606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.596645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.596661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.596687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.786 [2024-07-15 15:43:48.596703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.596729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4384 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.596744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.596769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.596784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.596809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.596824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.596850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.596864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.596904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.596932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.596970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.596983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597161] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597671] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.597964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.597978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.598017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.598056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.598095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.598134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 
m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.598174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.598213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.598252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.598291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.598336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.598378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.598416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.786 [2024-07-15 15:43:48.598455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:34.786 [2024-07-15 15:43:48.598481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.787 [2024-07-15 15:43:48.598494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.598520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.787 [2024-07-15 15:43:48.598550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.598578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.787 [2024-07-15 15:43:48.598609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.598639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.787 [2024-07-15 15:43:48.598654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.598682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.787 [2024-07-15 15:43:48.598697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.598736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.787 [2024-07-15 15:43:48.598750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.598778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.787 [2024-07-15 15:43:48.598793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.598831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.787 [2024-07-15 15:43:48.598849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.598878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.787 [2024-07-15 15:43:48.598893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.598929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.787 [2024-07-15 15:43:48.598952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.598981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:48.598997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.599025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:48.599040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.599068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:48.599083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.599140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:48.599155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.599195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:48.599208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.599234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:48.599247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.599273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:48.599286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.599312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:48.599325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.599351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:48.599364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:48.599407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:48.599421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.618077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.618130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.618193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 
[2024-07-15 15:43:55.618229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.618250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.618264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.618281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.618294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.619671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.619698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.619726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.619741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.619765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.619779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.619802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.619816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.619838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.619853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.619900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.619928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.619977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.619989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.620008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93296 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.620020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.620039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.620052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.620071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.620094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.620116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.620129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.620148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.620161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.620180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.620192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.620212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.620224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.620244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.620256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.620276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.620288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.620307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.620319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.620339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:102 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.620351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:34.787 [2024-07-15 15:43:55.620371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.787 [2024-07-15 15:43:55.620383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.620415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.620448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.620480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.620518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.620587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.620639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.620678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.620715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620739] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.620753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.620791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.620829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.620883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.620962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.620981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.620993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.621683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.621976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.788 [2024-07-15 15:43:55.622015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.622043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.622057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.622081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.622111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.622136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.622149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.622189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.622204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.622228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.622242] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.622267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.622280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.622305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.622330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.622357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.622371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:34.788 [2024-07-15 15:43:55.622409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.788 [2024-07-15 15:43:55.622423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.622447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.622460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.622484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.622497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.622522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.622551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.622593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.622608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.622634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.622648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.622690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.622708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.622735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.622750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.622777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.622792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.622818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.622859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.622889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.622912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.622941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.622956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.622984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.622999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 
nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.623966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.623979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.624003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.624016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:34.789 [2024-07-15 15:43:55.624046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.624061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:34.789 
[2024-07-15 15:43:55.624086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.789 [2024-07-15 15:43:55.624099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:43:55.624123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:43:55.624136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:43:55.624161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:43:55.624174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:43:55.624199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:43:55.624212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.908545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.790 [2024-07-15 15:44:08.908605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.908635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.790 [2024-07-15 15:44:08.908650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.908666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.790 [2024-07-15 15:44:08.908680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.908695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.790 [2024-07-15 15:44:08.908708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.908723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.908737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.908752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.908765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 
[2024-07-15 15:44:08.908780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.908793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.908808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.908839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.908871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.908884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.908912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.908940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.908954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.908966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.908979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.908991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:98 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.790 [2024-07-15 15:44:08.909696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65496 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:34.790 [2024-07-15 15:44:08.909709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.790 [2024-07-15 15:44:08.909724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:34.790 [2024-07-15 15:44:08.909738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.790-00:21:34.793 [2024-07-15 15:44:08.909753 - 15:44:08.912391] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: the same command/completion pair for every remaining outstanding I/O on qid:1 (WRITE lba:65512-65536, READ lba:64824-65232, WRITE lba:65544-65792, len:8 each), every one completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 while the submission queue was deleted for the controller reset
00:21:34.793 [2024-07-15 15:44:08.912420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:34.793 [2024-07-15 15:44:08.912433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65800 len:8 PRP1 0x0 PRP2 0x0
00:21:34.793 [2024-07-15 15:44:08.912445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.793 [2024-07-15 15:44:08.912461] nvme_qpair.c:
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:34.793 [2024-07-15 15:44:08.912470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:34.793 [2024-07-15 15:44:08.912481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65808 len:8 PRP1 0x0 PRP2 0x0 00:21:34.793 [2024-07-15 15:44:08.912492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.793 [2024-07-15 15:44:08.912550] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe76500 was disconnected and freed. reset controller. 00:21:34.793 [2024-07-15 15:44:08.912663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.793 [2024-07-15 15:44:08.912686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.793 [2024-07-15 15:44:08.912700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.793 [2024-07-15 15:44:08.912712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.793 [2024-07-15 15:44:08.912725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.793 [2024-07-15 15:44:08.912736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.793 [2024-07-15 15:44:08.912748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.793 [2024-07-15 15:44:08.912760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.793 [2024-07-15 15:44:08.912772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10424d0 is same with the state(5) to be set 00:21:34.793 [2024-07-15 15:44:08.914118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.793 [2024-07-15 15:44:08.914153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10424d0 (9): Bad file descriptor 00:21:34.793 [2024-07-15 15:44:08.914276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.793 [2024-07-15 15:44:08.914302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10424d0 with addr=10.0.0.2, port=4421 00:21:34.793 [2024-07-15 15:44:08.914317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10424d0 is same with the state(5) to be set 00:21:34.793 [2024-07-15 15:44:08.914338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10424d0 (9): Bad file descriptor 00:21:34.793 [2024-07-15 15:44:08.914357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.793 [2024-07-15 15:44:08.914369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.793 [2024-07-15 15:44:08.914382] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.793 [2024-07-15 15:44:08.914403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:34.793 [2024-07-15 15:44:08.914416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.793 [2024-07-15 15:44:18.971299] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:34.793 Received shutdown signal, test time was about 55.031545 seconds 00:21:34.793 00:21:34.793 Latency(us) 00:21:34.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.793 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:34.793 Verification LBA range: start 0x0 length 0x4000 00:21:34.793 Nvme0n1 : 55.03 8465.79 33.07 0.00 0.00 15091.86 603.23 7046430.72 00:21:34.793 =================================================================================================================== 00:21:34.793 Total : 8465.79 33.07 0.00 0.00 15091.86 603.23 7046430.72 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:34.793 rmmod nvme_tcp 00:21:34.793 rmmod nvme_fabrics 00:21:34.793 rmmod nvme_keyring 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94158 ']' 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94158 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94158 ']' 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94158 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94158 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:34.793 killing process with pid 
94158 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94158' 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94158 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94158 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:34.793 00:21:34.793 real 1m0.147s 00:21:34.793 user 2m50.323s 00:21:34.793 sys 0m13.125s 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:34.793 ************************************ 00:21:34.793 END TEST nvmf_host_multipath 00:21:34.793 ************************************ 00:21:34.793 15:44:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:34.793 15:44:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:34.793 15:44:29 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:34.793 15:44:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:34.793 15:44:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.793 15:44:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:34.793 ************************************ 00:21:34.793 START TEST nvmf_timeout 00:21:34.793 ************************************ 00:21:34.793 15:44:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:34.793 * Looking for test storage... 
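For reference, the nvmf_host_multipath teardown traced just above (nvmftestfini, nvmfcleanup and killprocess from test/nvmf/common.sh and autotest_common.sh) reduces to a handful of commands. The lines below are a rough standalone sketch using the paths and pid visible in the log, not a verbatim copy of those helpers:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem before shutting down
  sync
  modprobe -v -r nvme-tcp                                 # unload the host-side NVMe/TCP modules
  modprobe -v -r nvme-fabrics
  kill 94158 && wait 94158        # stop the nvmf_tgt app; wait only works because the test shell started it
  ip -4 addr flush nvmf_init_if                           # flush the initiator-side veth address
The namespace and bridge cleanup is delegated to _remove_spdk_ns in the trace and is not expanded in this sketch.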
00:21:34.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:34.793 15:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:34.793 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:34.793 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.793 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.793 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.793 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.793 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.793 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.793 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.793 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.793 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.794 
15:44:29 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.794 15:44:29 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:34.794 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:35.052 Cannot find device "nvmf_tgt_br" 00:21:35.052 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:21:35.052 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:35.052 Cannot find device "nvmf_tgt_br2" 00:21:35.053 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:21:35.053 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:35.053 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:35.053 Cannot find device "nvmf_tgt_br" 00:21:35.053 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:21:35.053 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:35.053 Cannot find device "nvmf_tgt_br2" 00:21:35.053 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:21:35.053 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:35.053 15:44:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:35.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:35.053 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:35.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:21:35.053 00:21:35.053 --- 10.0.0.2 ping statistics --- 00:21:35.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.053 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:35.053 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:35.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:35.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:21:35.053 00:21:35.053 --- 10.0.0.3 ping statistics --- 00:21:35.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.053 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:35.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:35.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:21:35.311 00:21:35.311 --- 10.0.0.1 ping statistics --- 00:21:35.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.311 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=95499 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 95499 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 95499 ']' 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.311 15:44:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:35.311 [2024-07-15 15:44:30.274952] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
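For anyone reproducing this environment by hand, the veth and namespace topology that nvmf_veth_init builds in the trace above can be approximated with the commands below. Interface names, addresses and firewall rules are exactly the ones shown in the log; this is only a condensed sketch of what nvmf/common.sh does, without its cleanup and error handling:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target interfaces live inside the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                   # bridge the three host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                  # same reachability checks as the trace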
00:21:35.311 [2024-07-15 15:44:30.275041] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.311 [2024-07-15 15:44:30.414595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:35.570 [2024-07-15 15:44:30.484458] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.570 [2024-07-15 15:44:30.484516] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.570 [2024-07-15 15:44:30.484543] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.570 [2024-07-15 15:44:30.484554] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.570 [2024-07-15 15:44:30.484562] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.570 [2024-07-15 15:44:30.484734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.570 [2024-07-15 15:44:30.484748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.136 15:44:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.136 15:44:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:36.136 15:44:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.136 15:44:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:36.136 15:44:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:36.394 15:44:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.394 15:44:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:36.394 15:44:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:36.394 [2024-07-15 15:44:31.482345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.394 15:44:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:36.960 Malloc0 00:21:36.960 15:44:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:36.960 15:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:37.217 15:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.475 [2024-07-15 15:44:32.474666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
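Stripped of the xtrace noise, the target-side bring-up for the timeout test is the nvmf_tgt start above plus five RPC calls. The sketch below simply restates them as a plain script with the values from the log; it assumes the target is already answering on the default /var/tmp/spdk.sock:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport with the test's options
  $rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose Malloc0 as the namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420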
00:21:37.475 15:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:37.475 15:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=95590 00:21:37.475 15:44:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 95590 /var/tmp/bdevperf.sock 00:21:37.475 15:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 95590 ']' 00:21:37.475 15:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.475 15:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.475 15:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.475 15:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.475 15:44:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:37.475 [2024-07-15 15:44:32.532744] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:21:37.475 [2024-07-15 15:44:32.532834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95590 ] 00:21:37.733 [2024-07-15 15:44:32.662953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.733 [2024-07-15 15:44:32.717346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.357 15:44:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.357 15:44:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:38.357 15:44:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:38.615 15:44:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:39.182 NVMe0n1 00:21:39.182 15:44:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=95638 00:21:39.182 15:44:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:39.182 15:44:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:39.182 Running I/O for 10 seconds... 
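The host side mirrors this: bdevperf is started idle (-z) on its own RPC socket, the NVMe-oF controller is attached with the reconnect knobs that the timeout test exercises, and the workload is kicked off through bdevperf.py. This is the sequence from the trace collected into one place; the trailing '&' is added here only so the sketch reads as a script:
  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2             # 5 s controller-loss window, 2 s reconnect delay
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
The 128-deep, 4096-byte verify workload with a 10 second runtime is what produces the "Running I/O for 10 seconds..." line above.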
00:21:40.118 15:44:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.379 [2024-07-15 15:44:35.278303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.379 [2024-07-15 15:44:35.278367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.379 [2024-07-15 15:44:35.278393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.379 [2024-07-15 15:44:35.278400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.379 [2024-07-15 15:44:35.278407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.379 [2024-07-15 15:44:35.278414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.379 [2024-07-15 15:44:35.278421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.379 [2024-07-15 15:44:35.278429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.379 [2024-07-15 15:44:35.278435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.278649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9900 is same with the state(5) to be set 00:21:40.380 [2024-07-15 15:44:35.279035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93224 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.380 [2024-07-15 15:44:35.279413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.380 [2024-07-15 15:44:35.279434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.380 [2024-07-15 15:44:35.279453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.380 [2024-07-15 15:44:35.279471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.380 [2024-07-15 15:44:35.279490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.380 [2024-07-15 15:44:35.279508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.380 [2024-07-15 15:44:35.279527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.380 [2024-07-15 15:44:35.279562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:40.380 [2024-07-15 15:44:35.279640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.380 [2024-07-15 15:44:35.279761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.380 [2024-07-15 15:44:35.279772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.279781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.279793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.279802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.279813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.279822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.279833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.279843] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.279854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.279863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.279874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.279882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.279893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.279902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.279913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.279922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.279933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.279942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.279952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.279962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.279973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.279982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.279993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280042] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:40.381 [2024-07-15 15:44:35.280460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.381 [2024-07-15 15:44:35.280657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.381 [2024-07-15 15:44:35.280667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 
15:44:35.280688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.280981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.280992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:83 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93184 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.382 [2024-07-15 15:44:35.281387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.382 [2024-07-15 15:44:35.281408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.382 [2024-07-15 15:44:35.281428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.382 [2024-07-15 15:44:35.281449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.382 [2024-07-15 15:44:35.281469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.382 [2024-07-15 15:44:35.281490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:40.382 [2024-07-15 15:44:35.281510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.382 [2024-07-15 15:44:35.281542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.382 [2024-07-15 15:44:35.281554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.383 [2024-07-15 15:44:35.281564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.383 [2024-07-15 15:44:35.281575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.383 [2024-07-15 15:44:35.281584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.383 [2024-07-15 15:44:35.281595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.383 [2024-07-15 15:44:35.281604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.383 [2024-07-15 15:44:35.281615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.383 [2024-07-15 15:44:35.281624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.383 [2024-07-15 15:44:35.281635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.383 [2024-07-15 15:44:35.281644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.383 [2024-07-15 15:44:35.281655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.383 [2024-07-15 15:44:35.281664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.383 [2024-07-15 15:44:35.281675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.383 [2024-07-15 15:44:35.281684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.383 [2024-07-15 15:44:35.281696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.383 [2024-07-15 15:44:35.281707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.383 [2024-07-15 15:44:35.281718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:40.383 [2024-07-15 15:44:35.281728] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.383 [2024-07-15 15:44:35.281759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:40.383 [2024-07-15 15:44:35.281770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:40.383 [2024-07-15 15:44:35.281778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93416 len:8 PRP1 0x0 PRP2 0x0 00:21:40.383 [2024-07-15 15:44:35.281787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.383 [2024-07-15 15:44:35.281829] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe148d0 was disconnected and freed. reset controller. 00:21:40.383 [2024-07-15 15:44:35.282081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:40.383 [2024-07-15 15:44:35.282160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda7240 (9): Bad file descriptor 00:21:40.383 [2024-07-15 15:44:35.282270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.383 [2024-07-15 15:44:35.282291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda7240 with addr=10.0.0.2, port=4420 00:21:40.383 [2024-07-15 15:44:35.282302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda7240 is same with the state(5) to be set 00:21:40.383 [2024-07-15 15:44:35.282320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda7240 (9): Bad file descriptor 00:21:40.383 [2024-07-15 15:44:35.282337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:40.383 [2024-07-15 15:44:35.282346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:40.383 [2024-07-15 15:44:35.282356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:40.383 [2024-07-15 15:44:35.282375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
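The block above is the first reconnect cycle: pulling the listener tears down the qpair, every outstanding request is completed as ABORTED - SQ DELETION, and bdev_nvme resets the controller, whose reconnect immediately fails with connect() errno 111 (ECONNREFUSED). With --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, the retries that follow land roughly two seconds apart (the 15:44:35, :37 and :39 attempts) until the loss timeout expires and the controller is dropped. While that window is still open, the test confirms the controller and its bdev remain registered by querying the bdevperf RPC socket, as in this condensed form of the get_controller/get_bdev helpers seen below (jq assumed to be available):

    # while reconnects are still pending, both names are still reported
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # NVMe0
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'              # NVMe0n1
    # once the ctrlr-loss timeout fires, both commands print nothing,
    # which is what the later [[ '' == '' ]] checks in this log assert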
00:21:40.383 [2024-07-15 15:44:35.282386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:40.383 15:44:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:21:42.306 [2024-07-15 15:44:37.282493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.306 [2024-07-15 15:44:37.282580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda7240 with addr=10.0.0.2, port=4420 00:21:42.306 [2024-07-15 15:44:37.282595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda7240 is same with the state(5) to be set 00:21:42.306 [2024-07-15 15:44:37.282616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda7240 (9): Bad file descriptor 00:21:42.306 [2024-07-15 15:44:37.282642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:42.306 [2024-07-15 15:44:37.282652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:42.306 [2024-07-15 15:44:37.282661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:42.306 [2024-07-15 15:44:37.282684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.306 [2024-07-15 15:44:37.282694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:42.306 15:44:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:21:42.306 15:44:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:42.306 15:44:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:42.564 15:44:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:21:42.564 15:44:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:21:42.564 15:44:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:42.564 15:44:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:42.822 15:44:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:21:42.822 15:44:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:21:44.199 [2024-07-15 15:44:39.282843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.199 [2024-07-15 15:44:39.282926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda7240 with addr=10.0.0.2, port=4420 00:21:44.199 [2024-07-15 15:44:39.282943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda7240 is same with the state(5) to be set 00:21:44.199 [2024-07-15 15:44:39.282964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda7240 (9): Bad file descriptor 00:21:44.199 [2024-07-15 15:44:39.282981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:44.199 [2024-07-15 15:44:39.282990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:44.199 [2024-07-15 15:44:39.283000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:44.199 [2024-07-15 15:44:39.283024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:44.199 [2024-07-15 15:44:39.283035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.733 [2024-07-15 15:44:41.283176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.733 [2024-07-15 15:44:41.283262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.734 [2024-07-15 15:44:41.283289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.734 [2024-07-15 15:44:41.283298] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:46.734 [2024-07-15 15:44:41.283320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.301 00:21:47.301 Latency(us) 00:21:47.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.301 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:47.301 Verification LBA range: start 0x0 length 0x4000 00:21:47.301 NVMe0n1 : 8.13 1420.42 5.55 15.74 0.00 89011.16 1765.00 7015926.69 00:21:47.301 =================================================================================================================== 00:21:47.301 Total : 1420.42 5.55 15.74 0.00 89011.16 1765.00 7015926.69 00:21:47.301 0 00:21:47.869 15:44:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:21:47.869 15:44:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:47.869 15:44:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:48.128 15:44:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:21:48.128 15:44:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:21:48.128 15:44:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:48.128 15:44:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:48.386 15:44:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:21:48.386 15:44:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 95638 00:21:48.386 15:44:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 95590 00:21:48.386 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 95590 ']' 00:21:48.386 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 95590 00:21:48.386 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:48.386 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.386 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95590 00:21:48.386 killing process with pid 95590 00:21:48.386 Received shutdown signal, test time was about 9.178811 seconds 00:21:48.386 00:21:48.386 Latency(us) 00:21:48.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.387 =================================================================================================================== 00:21:48.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:48.387 15:44:43 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:48.387 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:48.387 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95590' 00:21:48.387 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 95590 00:21:48.387 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 95590 00:21:48.387 15:44:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.645 [2024-07-15 15:44:43.654116] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.645 15:44:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:48.645 15:44:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=95790 00:21:48.645 15:44:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 95790 /var/tmp/bdevperf.sock 00:21:48.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:48.645 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 95790 ']' 00:21:48.645 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:48.645 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.645 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:48.645 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.645 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:48.645 [2024-07-15 15:44:43.718210] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
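Setup for the next test case, condensed from the commands logged above into a sketch: re-add the TCP listener, then start bdevperf in wait-for-RPC mode on its own socket. The backgrounding and pid capture shown here are illustrative stand-ins for the harness's own helpers; paths and addresses are the ones used in this workspace.
  # Re-add the TCP listener on the target so connections to 10.0.0.2:4420 are accepted again
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Start bdevperf pinned to core 2 (-m 0x4), idle until told to run (-z), controlled over /var/tmp/bdevperf.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  # The controller is then attached and the workload kicked off through that socket once it is listening,
  # as the bdev_nvme_attach_controller and perform_tests calls that follow in the trace show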
00:21:48.645 [2024-07-15 15:44:43.718463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95790 ] 00:21:48.904 [2024-07-15 15:44:43.850322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.904 [2024-07-15 15:44:43.901035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.904 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.904 15:44:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:48.904 15:44:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:49.168 15:44:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:21:49.458 NVMe0n1 00:21:49.458 15:44:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=95823 00:21:49.458 15:44:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:49.458 15:44:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:21:49.737 Running I/O for 10 seconds... 00:21:50.684 15:44:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.684 [2024-07-15 15:44:45.717402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.684 [2024-07-15 15:44:45.717463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.684 [2024-07-15 15:44:45.717491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.684 [2024-07-15 15:44:45.717499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.684 [2024-07-15 15:44:45.717507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.684 [2024-07-15 15:44:45.717514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.684 [2024-07-15 15:44:45.717522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.684 [2024-07-15 15:44:45.717529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.684 [2024-07-15 15:44:45.717564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.684 [2024-07-15 15:44:45.717574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.684 [2024-07-15 15:44:45.717582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.684 [2024-07-15 15:44:45.717590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.684 [2024-07-15 15:44:45.717597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717621] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.717699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7b50 is same with the state(5) to be set 00:21:50.685 [2024-07-15 15:44:45.718294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.685 [2024-07-15 15:44:45.718351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.685 [2024-07-15 15:44:45.718386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.685 [2024-07-15 15:44:45.718407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:50.685 [2024-07-15 15:44:45.718419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.685 [2024-07-15 15:44:45.718428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.685 [2024-07-15 15:44:45.718450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.685 [2024-07-15 15:44:45.718470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.685 [2024-07-15 15:44:45.718491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.685 [2024-07-15 15:44:45.718511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.685 [2024-07-15 15:44:45.718548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718666] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718892] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.718980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.718989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.719001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.719011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.719022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.719032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.719043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.719053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.719064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.719074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.719085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.719095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.719107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93352 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.719117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.719128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.719138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.685 [2024-07-15 15:44:45.719149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.685 [2024-07-15 15:44:45.719159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 
[2024-07-15 15:44:45.719340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.686 [2024-07-15 15:44:45.719641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.719986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.719996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.720007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.720016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.720027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.720037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.720048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.720057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.720068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.720078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.720089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.720098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.686 [2024-07-15 15:44:45.720109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.686 [2024-07-15 15:44:45.720118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 
[2024-07-15 15:44:45.720234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720438] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.687 [2024-07-15 15:44:45.720860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.687 [2024-07-15 15:44:45.720880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:85 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.687 [2024-07-15 15:44:45.720901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.687 [2024-07-15 15:44:45.720922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.687 [2024-07-15 15:44:45.720943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.720973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.687 [2024-07-15 15:44:45.720985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93576 len:8 PRP1 0x0 PRP2 0x0 00:21:50.687 [2024-07-15 15:44:45.720995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.721066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.687 [2024-07-15 15:44:45.721082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.721093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.687 [2024-07-15 15:44:45.721103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.721113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.687 [2024-07-15 15:44:45.721122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.687 [2024-07-15 15:44:45.721132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.688 [2024-07-15 15:44:45.721142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2400240 is same with the state(5) to be set 00:21:50.688 [2024-07-15 15:44:45.721405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93584 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:50.688 [2024-07-15 15:44:45.721462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93592 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93600 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93608 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93616 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93120 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93128 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721691] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93136 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93144 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93152 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92600 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92608 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92616 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:21:50.688 [2024-07-15 15:44:45.721915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92624 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92632 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.721967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.721977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.721984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.721992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92640 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.722001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.722011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.722018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.722026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92648 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.722035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.722044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.722053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.722062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92656 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.722071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.722081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.722088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.722097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92664 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.722106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.722115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.722123] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.722131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93160 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.722142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.722151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.722159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.722167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93168 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.722176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.722186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.722193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.688 [2024-07-15 15:44:45.722201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93176 len:8 PRP1 0x0 PRP2 0x0 00:21:50.688 [2024-07-15 15:44:45.722210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.688 [2024-07-15 15:44:45.722220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.688 [2024-07-15 15:44:45.722227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.722235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93184 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.722244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.722254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.722261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.722269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93192 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.722278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.722288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.722295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.722304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93200 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.722313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.722322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.735430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.735482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93208 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.735500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.735517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.735569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.735581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93216 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.735594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.735607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.735617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.735628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93224 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.735641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.735653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.735664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.735674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93232 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.735691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.735703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.735712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.735723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93240 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.735735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.735747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.735756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.735766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93248 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.735778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.735790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.735800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 
15:44:45.735810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93256 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.735822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.735834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.735844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.735854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93264 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.735866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.735878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.735888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.735898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93272 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.735910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.735923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.735932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.735943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93280 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.735955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.735967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.735977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.735988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93288 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.735999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.736012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.736021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.736031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93296 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.736043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.736055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.736065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.736075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93304 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.736087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.736099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.736109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.736119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93312 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.736131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.736144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.736153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.736163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93320 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.736175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.736187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.736196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.736206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93328 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.736218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.736230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.736240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.736250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93336 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.736262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.736275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.736285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.736295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93344 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.736307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.736319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.736329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.736339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:93352 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.736351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.736363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.689 [2024-07-15 15:44:45.736372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.689 [2024-07-15 15:44:45.736383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93360 len:8 PRP1 0x0 PRP2 0x0 00:21:50.689 [2024-07-15 15:44:45.736394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.689 [2024-07-15 15:44:45.736407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93368 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.736438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93376 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.736483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93384 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.736538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93392 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.736584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93400 len:8 PRP1 0x0 PRP2 0x0 
00:21:50.690 [2024-07-15 15:44:45.736629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93408 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.736673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93416 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.736717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93424 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.736760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93432 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.736804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93440 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.736847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93448 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.736890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93456 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.736934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.736957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.736967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93464 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.736978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.736990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.737000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.737010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93472 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.737022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.737034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.737043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.737054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93480 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.737065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.737077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.737087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.737097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93488 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.737108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.737120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.737130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.737147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93496 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.737159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.737171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.737180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.737190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93504 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.737202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.737214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.737223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.737233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93512 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.737245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.737257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.737266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.737276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93520 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.737288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.737301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.737310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.737320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93528 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.737332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.737344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.737354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.737364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93536 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.737376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.737388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.737397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.737407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92672 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.737419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:50.690 [2024-07-15 15:44:45.737431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.737440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.737450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92680 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.737462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.737474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.690 [2024-07-15 15:44:45.737484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.690 [2024-07-15 15:44:45.737494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92688 len:8 PRP1 0x0 PRP2 0x0 00:21:50.690 [2024-07-15 15:44:45.737505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.690 [2024-07-15 15:44:45.737517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.737544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.737555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92696 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.737566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.737578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.737589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.737599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92704 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.737611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.737623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.737633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.737643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92712 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.737656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.737668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.737677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.737687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92720 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.737699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.737712] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.737721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.737731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92728 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.737743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.737754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.737765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.737776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92736 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.737787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.737799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.737809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.737819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92744 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.737831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.737843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.737853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.737863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92752 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.737875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.737887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.737896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.737914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92760 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.737926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.737938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.737948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.737958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92768 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.737970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.737982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:21:50.691 [2024-07-15 15:44:45.737991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92776 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92784 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92792 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92800 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92808 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92816 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738261] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92824 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92832 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92840 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92848 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92856 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92864 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:21:50.691 [2024-07-15 15:44:45.738563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92872 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92880 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.691 [2024-07-15 15:44:45.738650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92888 len:8 PRP1 0x0 PRP2 0x0 00:21:50.691 [2024-07-15 15:44:45.738662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.691 [2024-07-15 15:44:45.738675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.691 [2024-07-15 15:44:45.738684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.738694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92896 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.738708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.738721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.738731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.738741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92904 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.738754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.738766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.738776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.738787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92912 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.738798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.738810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.738820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.738830] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92920 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.738842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.738854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.738879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.738892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92928 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.738904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.738916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.738926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.738937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92936 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.738949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.738961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.738970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.738981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92944 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.738992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.739004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.739013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.739024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92952 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.739035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.739047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.739056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.739067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92960 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.739080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.739093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.739102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.739112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:92968 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.739125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.739137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 15:44:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:21:50.692 [2024-07-15 15:44:45.746125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92976 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92984 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92992 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93000 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93008 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93016 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93024 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93032 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93040 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93048 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93056 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:93064 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93072 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93080 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.692 [2024-07-15 15:44:45.746954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93088 len:8 PRP1 0x0 PRP2 0x0 00:21:50.692 [2024-07-15 15:44:45.746967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.692 [2024-07-15 15:44:45.746981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.692 [2024-07-15 15:44:45.746991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.693 [2024-07-15 15:44:45.747002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93096 len:8 PRP1 0x0 PRP2 0x0 00:21:50.693 [2024-07-15 15:44:45.747015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.693 [2024-07-15 15:44:45.747029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.693 [2024-07-15 15:44:45.747039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.693 [2024-07-15 15:44:45.747050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93104 len:8 PRP1 0x0 PRP2 0x0 00:21:50.693 [2024-07-15 15:44:45.747063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.693 [2024-07-15 15:44:45.747076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.693 [2024-07-15 15:44:45.747087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.693 [2024-07-15 15:44:45.747099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93112 len:8 PRP1 0x0 PRP2 0x0 00:21:50.693 
[2024-07-15 15:44:45.747112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.693 [2024-07-15 15:44:45.747125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.693 [2024-07-15 15:44:45.747136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.693 [2024-07-15 15:44:45.747147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93544 len:8 PRP1 0x0 PRP2 0x0 00:21:50.693 [2024-07-15 15:44:45.747160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.693 [2024-07-15 15:44:45.747173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.693 [2024-07-15 15:44:45.747195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.693 [2024-07-15 15:44:45.747206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93552 len:8 PRP1 0x0 PRP2 0x0 00:21:50.693 [2024-07-15 15:44:45.747220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.693 [2024-07-15 15:44:45.747241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.693 [2024-07-15 15:44:45.747252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.693 [2024-07-15 15:44:45.747263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93560 len:8 PRP1 0x0 PRP2 0x0 00:21:50.693 [2024-07-15 15:44:45.747276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.693 [2024-07-15 15:44:45.747290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.693 [2024-07-15 15:44:45.747300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.693 [2024-07-15 15:44:45.747312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93568 len:8 PRP1 0x0 PRP2 0x0 00:21:50.693 [2024-07-15 15:44:45.747325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.693 [2024-07-15 15:44:45.747339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.693 [2024-07-15 15:44:45.747350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.693 [2024-07-15 15:44:45.747361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93576 len:8 PRP1 0x0 PRP2 0x0 00:21:50.693 [2024-07-15 15:44:45.747375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.693 [2024-07-15 15:44:45.747436] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x246d8d0 was disconnected and freed. reset controller. 
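The wall of ABORTED - SQ DELETION completions above is the expected effect of the target's TCP listener going away while bdevperf still has I/O queued: the submission queue is torn down, every queued READ/WRITE is completed manually as aborted, and bdev_nvme frees the disconnected qpair and kicks off a controller reset (the "reset controller" notice that closes the block). As a rough sketch of how that state is provoked (not the autotest script itself, just the two rpc.py calls that also appear in this log), the listener can be bounced like this:

  # Sketch only: bounce the NVMe/TCP listener to force queued I/O to be
  # aborted with SQ DELETION and push the host into its reconnect path.
  # Paths, NQN, address and port are the ones used by this job.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subsys=nqn.2016-06.io.spdk:cnode1

  # Removing the listener drops the connection; queued requests complete
  # as "ABORTED - SQ DELETION" and reconnect attempts fail with errno 111.
  $rpc nvmf_subsystem_remove_listener "$subsys" -t tcp -a 10.0.0.2 -s 4420

  # Re-adding it lets the next reconnect poll succeed ("Resetting controller
  # successful." further down in the log).
  $rpc nvmf_subsystem_add_listener "$subsys" -t tcp -a 10.0.0.2 -s 4420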
00:21:50.693 [2024-07-15 15:44:45.747515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2400240 (9): Bad file descriptor 
00:21:50.693 [2024-07-15 15:44:45.747875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:21:50.693 [2024-07-15 15:44:45.748042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:50.693 [2024-07-15 15:44:45.748072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2400240 with addr=10.0.0.2, port=4420 
00:21:50.693 [2024-07-15 15:44:45.748088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2400240 is same with the state(5) to be set 
00:21:50.693 [2024-07-15 15:44:45.748114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2400240 (9): Bad file descriptor 
00:21:50.693 [2024-07-15 15:44:45.748136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:21:50.693 [2024-07-15 15:44:45.748149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:21:50.693 [2024-07-15 15:44:45.748163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:50.693 [2024-07-15 15:44:45.748189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:50.693 [2024-07-15 15:44:45.748204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:21:51.630 15:44:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:21:51.630 [2024-07-15 15:44:46.748282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:51.630 [2024-07-15 15:44:46.748351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2400240 with addr=10.0.0.2, port=4420 
00:21:51.630 [2024-07-15 15:44:46.748364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2400240 is same with the state(5) to be set 
00:21:51.630 [2024-07-15 15:44:46.748384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2400240 (9): Bad file descriptor 
00:21:51.630 [2024-07-15 15:44:46.748399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:21:51.630 [2024-07-15 15:44:46.748409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:21:51.630 [2024-07-15 15:44:46.748418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:51.630 [2024-07-15 15:44:46.748437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:51.630 [2024-07-15 15:44:46.748447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:21:51.889 [2024-07-15 15:44:46.979370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:51.889 15:44:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 95823 
00:21:52.825 [2024-07-15 15:44:47.759418] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
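With the listener re-added at host/timeout.sh@91, the reconnect poll succeeds and the controller reset completes, so the 10-second verify run can report its totals below. The next case (host/timeout.sh@96-@99) then repeats the exercise the other way around: it starts a timed run through bdevperf's RPC socket and pulls the listener out from under it after one second. A minimal sketch of that driver sequence, assuming a bdevperf process is already serving RPCs on /var/tmp/bdevperf.sock with the NVMe-oF bdev attached (as in this job), looks like:

  # Minimal sketch, not the autotest script itself; commands are the ones
  # logged at host/timeout.sh@96-@99 below.
  bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Kick off the timed I/O run over the bdevperf RPC socket in the background.
  $bdevperf_py -s /var/tmp/bdevperf.sock perform_tests &
  rpc_pid=$!

  # Let the run get going, then remove the listener mid-run.
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # perform_tests returns once the run finishes; wait for it before
  # checking the result.
  wait $rpc_pid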
00:22:00.946 
00:22:00.947 Latency(us) 
00:22:00.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:00.947 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:22:00.947 Verification LBA range: start 0x0 length 0x4000 
00:22:00.947 NVMe0n1 : 10.01 7375.36 28.81 0.00 0.00 17331.43 1653.29 3050402.91 
00:22:00.947 =================================================================================================================== 
00:22:00.947 Total : 7375.36 28.81 0.00 0.00 17331.43 1653.29 3050402.91 
00:22:00.947 0 
00:22:00.947 15:44:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=95941 
00:22:00.947 15:44:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 
00:22:00.947 15:44:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:22:00.947 Running I/O for 10 seconds... 
00:22:00.947 15:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:22:00.947 [2024-07-15 15:44:55.870628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x750660 is same with the state(5) to be set 00:22:00.948 [2024-07-15 15:44:55.871768] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x750660 is same with the state(5) to be set 00:22:00.948 [2024-07-15 15:44:55.871776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x750660 is same with the state(5) to be set 00:22:00.948 [2024-07-15 15:44:55.871784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x750660 is same with the state(5) to be set 00:22:00.948 [2024-07-15 15:44:55.873394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.948 [2024-07-15 15:44:55.873927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.948 [2024-07-15 15:44:55.873937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.873948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.873958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.873969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.873979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.873991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 
[2024-07-15 15:44:55.874127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874333] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.949 [2024-07-15 15:44:55.874776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874788] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.949 [2024-07-15 15:44:55.874798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.949 [2024-07-15 15:44:55.874819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.949 [2024-07-15 15:44:55.874840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.949 [2024-07-15 15:44:55.874861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.949 [2024-07-15 15:44:55.874872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.949 [2024-07-15 15:44:55.874894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.874905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.874917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.874930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.874940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.874952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.874961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.874973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.874982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.874994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95488 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 
[2024-07-15 15:44:55.875248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.950 [2024-07-15 15:44:55.875799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.950 [2024-07-15 15:44:55.875811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.951 [2024-07-15 15:44:55.875820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.875832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.951 [2024-07-15 15:44:55.875842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.875853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.951 [2024-07-15 15:44:55.875863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.875874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.951 [2024-07-15 15:44:55.875884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.875895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.951 [2024-07-15 15:44:55.875905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.875916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.951 [2024-07-15 15:44:55.875926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.875937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.951 [2024-07-15 15:44:55.875946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.875973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.951 [2024-07-15 15:44:55.875982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.875993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.951 [2024-07-15 15:44:55.876002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.951 [2024-07-15 15:44:55.876023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.951 [2024-07-15 15:44:55.876043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.951 [2024-07-15 15:44:55.876081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95864 len:8 PRP1 0x0 PRP2 0x0 00:22:00.951 [2024-07-15 15:44:55.876091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.951 [2024-07-15 15:44:55.876113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.951 [2024-07-15 15:44:55.876121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95872 len:8 PRP1 0x0 PRP2 0x0 00:22:00.951 [2024-07-15 15:44:55.876130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.951 [2024-07-15 15:44:55.876148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.951 [2024-07-15 15:44:55.876157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95880 len:8 PRP1 
0x0 PRP2 0x0 00:22:00.951 [2024-07-15 15:44:55.876166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.951 [2024-07-15 15:44:55.876183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.951 [2024-07-15 15:44:55.876191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95888 len:8 PRP1 0x0 PRP2 0x0 00:22:00.951 [2024-07-15 15:44:55.876200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.951 [2024-07-15 15:44:55.876217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.951 [2024-07-15 15:44:55.876224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95896 len:8 PRP1 0x0 PRP2 0x0 00:22:00.951 [2024-07-15 15:44:55.876233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.951 [2024-07-15 15:44:55.876249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.951 [2024-07-15 15:44:55.876257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95904 len:8 PRP1 0x0 PRP2 0x0 00:22:00.951 [2024-07-15 15:44:55.876266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.951 [2024-07-15 15:44:55.876282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.951 [2024-07-15 15:44:55.876294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95912 len:8 PRP1 0x0 PRP2 0x0 00:22:00.951 [2024-07-15 15:44:55.876303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.951 [2024-07-15 15:44:55.876319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.951 [2024-07-15 15:44:55.876327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95920 len:8 PRP1 0x0 PRP2 0x0 00:22:00.951 [2024-07-15 15:44:55.876336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.951 [2024-07-15 15:44:55.876352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.951 [2024-07-15 15:44:55.876360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95928 len:8 PRP1 0x0 PRP2 0x0 00:22:00.951 [2024-07-15 15:44:55.876369] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.951 [2024-07-15 15:44:55.876385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.951 [2024-07-15 15:44:55.876393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95936 len:8 PRP1 0x0 PRP2 0x0 00:22:00.951 [2024-07-15 15:44:55.876402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.951 [2024-07-15 15:44:55.876420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.951 [2024-07-15 15:44:55.876429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95944 len:8 PRP1 0x0 PRP2 0x0 00:22:00.951 [2024-07-15 15:44:55.876438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.876447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.951 [2024-07-15 15:44:55.876455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.951 [2024-07-15 15:44:55.876462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95952 len:8 PRP1 0x0 PRP2 0x0 00:22:00.951 [2024-07-15 15:44:55.876471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 15:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:00.951 [2024-07-15 15:44:55.893031] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2480340 was disconnected and freed. reset controller. 
00:22:00.951 [2024-07-15 15:44:55.893145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.951 [2024-07-15 15:44:55.893162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.893173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.951 [2024-07-15 15:44:55.893181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.893190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.951 [2024-07-15 15:44:55.893199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.951 [2024-07-15 15:44:55.893207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.952 [2024-07-15 15:44:55.893216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.952 [2024-07-15 15:44:55.893224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2400240 is same with the state(5) to be set 00:22:00.952 [2024-07-15 15:44:55.893505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.952 [2024-07-15 15:44:55.893589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2400240 (9): Bad file descriptor 00:22:00.952 [2024-07-15 15:44:55.893688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.952 [2024-07-15 15:44:55.893710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2400240 with addr=10.0.0.2, port=4420 00:22:00.952 [2024-07-15 15:44:55.893722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2400240 is same with the state(5) to be set 00:22:00.952 [2024-07-15 15:44:55.893740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2400240 (9): Bad file descriptor 00:22:00.952 [2024-07-15 15:44:55.893756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.952 [2024-07-15 15:44:55.893766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.952 [2024-07-15 15:44:55.893776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.952 [2024-07-15 15:44:55.893796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.952 [2024-07-15 15:44:55.893807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.887 [2024-07-15 15:44:56.893889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.887 [2024-07-15 15:44:56.893946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2400240 with addr=10.0.0.2, port=4420 00:22:01.887 [2024-07-15 15:44:56.893975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2400240 is same with the state(5) to be set 00:22:01.887 [2024-07-15 15:44:56.893994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2400240 (9): Bad file descriptor 00:22:01.887 [2024-07-15 15:44:56.894010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.887 [2024-07-15 15:44:56.894019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.887 [2024-07-15 15:44:56.894027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.887 [2024-07-15 15:44:56.894048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.887 [2024-07-15 15:44:56.894059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.878 [2024-07-15 15:44:57.894124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.878 [2024-07-15 15:44:57.894194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2400240 with addr=10.0.0.2, port=4420 00:22:02.878 [2024-07-15 15:44:57.894207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2400240 is same with the state(5) to be set 00:22:02.878 [2024-07-15 15:44:57.894226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2400240 (9): Bad file descriptor 00:22:02.878 [2024-07-15 15:44:57.894241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.878 [2024-07-15 15:44:57.894249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.878 [2024-07-15 15:44:57.894258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.878 [2024-07-15 15:44:57.894277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.878 [2024-07-15 15:44:57.894287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.812 15:44:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.812 [2024-07-15 15:44:58.897873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.812 [2024-07-15 15:44:58.897955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2400240 with addr=10.0.0.2, port=4420 00:22:03.812 [2024-07-15 15:44:58.897983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2400240 is same with the state(5) to be set 00:22:03.812 [2024-07-15 15:44:58.898254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2400240 (9): Bad file descriptor 00:22:03.812 [2024-07-15 15:44:58.898507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.812 [2024-07-15 15:44:58.898555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.812 [2024-07-15 15:44:58.898566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.812 [2024-07-15 15:44:58.902376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.812 [2024-07-15 15:44:58.902422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.071 [2024-07-15 15:44:59.127401] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.071 15:44:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 95941 00:22:05.006 [2024-07-15 15:44:59.940192] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
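The entries above show the host's reconnect attempts to 10.0.0.2:4420 failing with errno 111 until host/timeout.sh re-adds the TCP listener on the subsystem, after which the controller reset finally succeeds. A minimal shell sketch of that recovery step (not part of the captured run; it reuses the rpc.py path, NQN, and address from the trace, and the get-listeners call is only an assumed extra sanity check, not something the test is shown running):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Re-create the TCP listener the test had removed, so the queued reconnects can complete:
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Assumed sanity check that the listener is back before waiting on the bdevperf job:
$rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1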
00:22:10.272 00:22:10.272 Latency(us) 00:22:10.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.272 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:10.272 Verification LBA range: start 0x0 length 0x4000 00:22:10.272 NVMe0n1 : 10.01 6015.81 23.50 4209.60 0.00 12490.00 729.83 3035150.89 00:22:10.272 =================================================================================================================== 00:22:10.272 Total : 6015.81 23.50 4209.60 0.00 12490.00 0.00 3035150.89 00:22:10.272 0 00:22:10.272 15:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 95790 00:22:10.272 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 95790 ']' 00:22:10.272 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 95790 00:22:10.272 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:10.272 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.272 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95790 00:22:10.272 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:10.272 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:10.272 killing process with pid 95790 00:22:10.272 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95790' 00:22:10.272 Received shutdown signal, test time was about 10.000000 seconds 00:22:10.272 00:22:10.272 Latency(us) 00:22:10.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.272 =================================================================================================================== 00:22:10.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.272 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 95790 00:22:10.272 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 95790 00:22:10.272 15:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96062 00:22:10.273 15:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:10.273 15:45:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96062 /var/tmp/bdevperf.sock 00:22:10.273 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96062 ']' 00:22:10.273 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.273 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:10.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.273 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.273 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:10.273 15:45:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:10.273 [2024-07-15 15:45:05.001301] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:22:10.273 [2024-07-15 15:45:05.001406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96062 ] 00:22:10.273 [2024-07-15 15:45:05.135027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.273 [2024-07-15 15:45:05.192386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.840 15:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:10.840 15:45:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:10.840 15:45:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96062 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:10.840 15:45:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96091 00:22:10.840 15:45:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:11.098 15:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:11.359 NVMe0n1 00:22:11.359 15:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96149 00:22:11.359 15:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:11.359 15:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:11.618 Running I/O for 10 seconds... 
00:22:12.551 15:45:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.810 [2024-07-15 15:45:07.739501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739975] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.739995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.740003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753e00 is same with the state(5) to be set 00:22:12.810 [2024-07-15 15:45:07.740383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87904 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.810 [2024-07-15 15:45:07.740784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.810 [2024-07-15 15:45:07.740795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.740807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.740816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.740827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:12.811 [2024-07-15 15:45:07.740836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.740847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.740857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.740868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.740892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.740904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.740912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.740923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.740932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.740943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.740952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.740972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.740981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.740992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 
15:45:07.741065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:56992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.811 [2024-07-15 15:45:07.741856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.811 [2024-07-15 15:45:07.741867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.741891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.741902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.741911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:12.812 [2024-07-15 15:45:07.741922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.741931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.741942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.741951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.741962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.741971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.741982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.741991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742118] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742518] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.812 [2024-07-15 15:45:07.742932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.812 [2024-07-15 15:45:07.742941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.813 [2024-07-15 15:45:07.742952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.813 [2024-07-15 15:45:07.742961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.813 [2024-07-15 15:45:07.742974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:12.813 [2024-07-15 15:45:07.742983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.813 [2024-07-15 15:45:07.742994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.813 [2024-07-15 15:45:07.743003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.813 [2024-07-15 15:45:07.743014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.813 [2024-07-15 15:45:07.743023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.813 [2024-07-15 15:45:07.743034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.813 [2024-07-15 15:45:07.743043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.813 [2024-07-15 15:45:07.743054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.813 [2024-07-15 15:45:07.743064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.813 [2024-07-15 15:45:07.743075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.813 [2024-07-15 15:45:07.743084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.813 [2024-07-15 15:45:07.743098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.813 [2024-07-15 15:45:07.743107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.813 [2024-07-15 15:45:07.743118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.813 [2024-07-15 15:45:07.743127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.813 [2024-07-15 15:45:07.743153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.813 [2024-07-15 15:45:07.743162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.813 [2024-07-15 15:45:07.743170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47296 len:8 PRP1 0x0 PRP2 0x0 00:22:12.813 [2024-07-15 15:45:07.743179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.813 [2024-07-15 15:45:07.743221] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fcc8d0 was disconnected and freed. reset controller. 
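Note on the block above: the storm of paired READ/completion messages is the SPDK NVMe host driver manually failing every command that was still queued on I/O qpair 1 when the TCP connection dropped; each completion carries the generic status printed as (00/08), that is status code type 0x0 with status code 0x08, Command Aborted due to SQ Deletion, after which bdev_nvme frees the qpair and starts a controller reset. A quick way to size up such a storm when reading a saved console log is sketched below; the file name timeout.log is only a placeholder for wherever this output was captured, not a file the test writes.

LOG=timeout.log   # placeholder: a saved copy of the console output above

# Total aborted completions. grep -o is used so that several matches packed
# onto one long log line are each counted.
grep -o 'ABORTED - SQ DELETION (00/08)' "$LOG" | wc -l

# Break the matches down by queue pair id so a single misbehaving qpair stands out.
grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' "$LOG" | sort | uniq -c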
00:22:12.813 [2024-07-15 15:45:07.743538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:12.813 [2024-07-15 15:45:07.743615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5f240 (9): Bad file descriptor 00:22:12.813 [2024-07-15 15:45:07.743725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.813 [2024-07-15 15:45:07.743747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5f240 with addr=10.0.0.2, port=4420 00:22:12.813 [2024-07-15 15:45:07.743758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f240 is same with the state(5) to be set 00:22:12.813 [2024-07-15 15:45:07.743776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5f240 (9): Bad file descriptor 00:22:12.813 [2024-07-15 15:45:07.743792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:12.813 [2024-07-15 15:45:07.743802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:12.813 [2024-07-15 15:45:07.743812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:12.813 [2024-07-15 15:45:07.743831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.813 [2024-07-15 15:45:07.743842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:12.813 15:45:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 96149 00:22:14.735 [2024-07-15 15:45:09.743987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.735 [2024-07-15 15:45:09.744045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5f240 with addr=10.0.0.2, port=4420 00:22:14.735 [2024-07-15 15:45:09.744060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f240 is same with the state(5) to be set 00:22:14.735 [2024-07-15 15:45:09.744080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5f240 (9): Bad file descriptor 00:22:14.735 [2024-07-15 15:45:09.744108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:14.735 [2024-07-15 15:45:09.744118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:14.735 [2024-07-15 15:45:09.744127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:14.735 [2024-07-15 15:45:09.744148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
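Each reconnect attempt above fails inside posix_sock_create with errno 111, which is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 while the target side is unreachable, so bdev_nvme marks the controller failed, waits its reconnect delay (about two seconds, going by the 15:45:07 and 15:45:09 timestamps) and tries again. When watching a loop like this by hand, a throwaway probe such as the sketch below (plain bash, using its /dev/tcp redirection) shows the moment a listener comes back; the address and port are copied from the messages above and everything else is illustrative.

ADDR=10.0.0.2   # from "addr=10.0.0.2" in the reconnect errors above
PORT=4420       # from "port=4420"

# Poll until a TCP connection to the listener succeeds.
until timeout 1 bash -c "cat < /dev/null > /dev/tcp/${ADDR}/${PORT}" 2>/dev/null; do
    echo "$(date +%T) ${ADDR}:${PORT} still refusing connections"
    sleep 2   # roughly the reconnect delay visible in the log timestamps
done
echo "${ADDR}:${PORT} is accepting connections again"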
00:22:14.735 [2024-07-15 15:45:09.744158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:16.634 [2024-07-15 15:45:11.744338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:16.634 [2024-07-15 15:45:11.744404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5f240 with addr=10.0.0.2, port=4420 00:22:16.634 [2024-07-15 15:45:11.744418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f240 is same with the state(5) to be set 00:22:16.634 [2024-07-15 15:45:11.744440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5f240 (9): Bad file descriptor 00:22:16.634 [2024-07-15 15:45:11.744457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:16.634 [2024-07-15 15:45:11.744466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:16.634 [2024-07-15 15:45:11.744476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:16.634 [2024-07-15 15:45:11.744500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:16.634 [2024-07-15 15:45:11.744511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:19.159 [2024-07-15 15:45:13.744601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:19.159 [2024-07-15 15:45:13.744635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:19.159 [2024-07-15 15:45:13.744646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:19.159 [2024-07-15 15:45:13.744655] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:19.159 [2024-07-15 15:45:13.744675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:19.725 00:22:19.725 Latency(us) 00:22:19.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.725 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:19.725 NVMe0n1 : 8.18 2959.81 11.56 15.66 0.00 42982.24 2144.81 7015926.69 00:22:19.725 =================================================================================================================== 00:22:19.726 Total : 2959.81 11.56 15.66 0.00 42982.24 2144.81 7015926.69 00:22:19.726 0 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:19.726 Attaching 5 probes... 
00:22:19.726 1264.136561: reset bdev controller NVMe0 00:22:19.726 1264.277966: reconnect bdev controller NVMe0 00:22:19.726 3264.484186: reconnect delay bdev controller NVMe0 00:22:19.726 3264.516933: reconnect bdev controller NVMe0 00:22:19.726 5264.841712: reconnect delay bdev controller NVMe0 00:22:19.726 5264.874695: reconnect bdev controller NVMe0 00:22:19.726 7265.186541: reconnect delay bdev controller NVMe0 00:22:19.726 7265.201897: reconnect bdev controller NVMe0 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 96091 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96062 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96062 ']' 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96062 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96062 00:22:19.726 killing process with pid 96062 00:22:19.726 Received shutdown signal, test time was about 8.231045 seconds 00:22:19.726 00:22:19.726 Latency(us) 00:22:19.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.726 =================================================================================================================== 00:22:19.726 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96062' 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96062 00:22:19.726 15:45:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96062 00:22:19.984 15:45:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.243 rmmod nvme_tcp 00:22:20.243 rmmod nvme_fabrics 00:22:20.243 rmmod nvme_keyring 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 95499 ']' 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 95499 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 95499 ']' 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 95499 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95499 00:22:20.243 killing process with pid 95499 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95499' 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 95499 00:22:20.243 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 95499 00:22:20.501 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.501 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.501 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.501 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.501 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.501 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.501 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.501 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.501 15:45:15 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:20.501 ************************************ 00:22:20.501 END TEST nvmf_timeout 00:22:20.501 ************************************ 00:22:20.501 00:22:20.501 real 0m45.752s 00:22:20.501 user 2m15.181s 00:22:20.501 sys 0m4.363s 00:22:20.501 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:20.501 15:45:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:20.501 15:45:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:20.501 15:45:15 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:22:20.501 15:45:15 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:22:20.501 15:45:15 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:20.501 15:45:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:20.501 15:45:15 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:22:20.501 00:22:20.501 real 14m59.234s 00:22:20.501 user 40m4.072s 00:22:20.501 sys 3m12.794s 00:22:20.501 15:45:15 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:20.501 15:45:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:20.501 ************************************ 00:22:20.501 END TEST nvmf_tcp 00:22:20.501 ************************************ 00:22:20.948 15:45:15 -- common/autotest_common.sh@1142 -- 
# return 0 00:22:20.948 15:45:15 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:22:20.948 15:45:15 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:20.948 15:45:15 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:20.948 15:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:20.948 15:45:15 -- common/autotest_common.sh@10 -- # set +x 00:22:20.948 ************************************ 00:22:20.948 START TEST spdkcli_nvmf_tcp 00:22:20.948 ************************************ 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:20.948 * Looking for test storage... 00:22:20.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.948 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96365 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96365 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 96365 ']' 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:20.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:20.949 15:45:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:20.949 [2024-07-15 15:45:15.833459] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:22:20.949 [2024-07-15 15:45:15.834192] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96365 ] 00:22:20.949 [2024-07-15 15:45:15.976302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:20.949 [2024-07-15 15:45:16.025085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.949 [2024-07-15 15:45:16.025093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.207 15:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:21.207 15:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:22:21.207 15:45:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:21.207 15:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:21.207 15:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:21.207 15:45:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:21.207 15:45:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:22:21.207 15:45:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:21.207 15:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:21.207 15:45:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:21.207 15:45:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:21.207 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:21.207 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:21.207 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:21.207 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:22:21.207 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:22:21.207 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:22:21.207 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 
127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:21.207 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:21.207 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:22:21.207 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:22:21.207 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:22:21.207 ' 00:22:23.736 [2024-07-15 15:45:18.781064] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.114 [2024-07-15 15:45:20.053960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:22:27.686 [2024-07-15 15:45:22.399395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:22:29.596 [2024-07-15 15:45:24.424670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:22:30.967 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:30.967 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:30.967 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:30.967 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:30.967 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:30.967 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:30.967 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:30.967 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:30.967 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:30.967 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:22:30.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:22:30.967 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:30.967 15:45:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:30.967 15:45:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:30.967 15:45:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.223 15:45:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:31.223 15:45:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:31.223 15:45:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.223 15:45:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:22:31.223 15:45:26 
spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:22:31.480 15:45:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:31.480 15:45:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:31.480 15:45:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:31.480 15:45:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:31.480 15:45:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.737 15:45:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:31.737 15:45:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:31.737 15:45:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.737 15:45:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:31.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:31.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:31.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:31.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:22:31.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:22:31.737 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:31.737 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:31.737 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:31.737 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:31.737 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:31.737 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:31.737 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:31.737 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:31.737 ' 00:22:37.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:37.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:37.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:37.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:37.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:22:37.004 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:22:37.004 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:37.004 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:37.004 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 
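The spdkcli deletions executing here correspond one-to-one to plain SPDK JSON-RPC calls; the following is only a hedged sketch of a few of them, using standard rpc.py method names recalled from general SPDK usage rather than taken from this trace.

    # same teardown expressed directly against the RPC socket
    # (RPC method names assumed from standard SPDK; this trace only shows the spdkcli form)
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1        # nsid 1, i.e. Malloc3
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2014-08.org.spdk:cnode1 nqn.2014-08.org.spdk:cnode2
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4262
    scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3
    scripts/rpc.py bdev_malloc_delete Malloc6
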
00:22:37.004 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:37.004 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:37.004 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:37.004 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:22:37.004 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:37.004 15:45:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:37.004 15:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:37.004 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:37.004 15:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96365 00:22:37.004 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96365 ']' 00:22:37.004 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96365 00:22:37.004 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:22:37.004 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.004 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96365 00:22:37.004 killing process with pid 96365 00:22:37.004 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:37.004 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:37.005 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96365' 00:22:37.005 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 96365 00:22:37.005 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 96365 00:22:37.265 15:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:22:37.265 15:45:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:22:37.265 15:45:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96365 ']' 00:22:37.265 15:45:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96365 00:22:37.265 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96365 ']' 00:22:37.265 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96365 00:22:37.265 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (96365) - No such process 00:22:37.265 Process with pid 96365 is not found 00:22:37.265 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 96365 is not found' 00:22:37.265 15:45:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:37.265 15:45:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:37.265 15:45:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:37.265 00:22:37.265 real 0m16.554s 00:22:37.265 user 0m35.806s 00:22:37.265 sys 0m0.829s 00:22:37.265 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:37.265 ************************************ 00:22:37.265 END TEST spdkcli_nvmf_tcp 00:22:37.265 ************************************ 00:22:37.265 15:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:37.265 15:45:32 -- common/autotest_common.sh@1142 -- # return 0 00:22:37.265 15:45:32 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:37.265 15:45:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:37.265 15:45:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.265 15:45:32 -- common/autotest_common.sh@10 -- # set +x 00:22:37.265 ************************************ 00:22:37.265 START TEST nvmf_identify_passthru 00:22:37.265 ************************************ 00:22:37.265 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:37.265 * Looking for test storage... 00:22:37.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:37.265 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:37.265 15:45:32 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.265 15:45:32 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.265 15:45:32 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.265 15:45:32 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.265 15:45:32 
nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.265 15:45:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.265 15:45:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:37.265 15:45:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.265 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.266 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:37.266 15:45:32 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.266 15:45:32 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.266 15:45:32 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.266 15:45:32 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.266 15:45:32 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.266 15:45:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.266 15:45:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:37.266 15:45:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.266 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.266 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:37.266 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:37.266 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:37.525 Cannot find device "nvmf_tgt_br" 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:37.525 Cannot find device "nvmf_tgt_br2" 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:37.525 Cannot find device "nvmf_tgt_br" 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:37.525 Cannot find device "nvmf_tgt_br2" 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:37.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:37.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:37.525 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:37.784 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:37.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:22:37.784 00:22:37.784 --- 10.0.0.2 ping statistics --- 00:22:37.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.784 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:37.784 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:37.784 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:37.784 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:22:37.784 00:22:37.784 --- 10.0.0.3 ping statistics --- 00:22:37.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.784 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:37.784 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:37.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:37.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:22:37.784 00:22:37.784 --- 10.0.0.1 ping statistics --- 00:22:37.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.784 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:22:37.784 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.784 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:22:37.784 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:37.784 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.784 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:37.784 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:37.784 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.784 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:37.784 15:45:32 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:37.784 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:37.784 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:37.784 15:45:32 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:22:37.784 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:22:37.784 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:22:37.784 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:37.784 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:22:37.784 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:22:38.042 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
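The bdf/serial discovery just traced condenses to the sketch below; commands, paths, and flags are copied from the trace, while the head -n1 selection of the first controller is an assumption standing in for the get_first_nvme_bdf helper.

    # pick the first NVMe controller's PCI address and read its serial number
    # (head -n1 is an assumed stand-in for the get_first_nvme_bdf helper)
    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}'
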
00:22:38.042 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:38.042 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:22:38.042 15:45:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:22:38.042 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:22:38.042 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:22:38.042 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.042 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:38.042 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:22:38.042 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.042 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:38.042 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=96841 00:22:38.042 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:38.042 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.043 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 96841 00:22:38.043 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 96841 ']' 00:22:38.043 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.043 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.043 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.043 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.043 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:38.301 [2024-07-15 15:45:33.231236] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:22:38.301 [2024-07-15 15:45:33.231329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.301 [2024-07-15 15:45:33.368598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.301 [2024-07-15 15:45:33.421702] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.301 [2024-07-15 15:45:33.421764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.301 [2024-07-15 15:45:33.421774] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.301 [2024-07-15 15:45:33.421781] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
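The target start-up traced above reduces to the following sketch; the waitforlisten helper is approximated here by polling a known RPC (spdk_get_version) until the socket answers, which is an assumption about its behaviour rather than its actual implementation.

    # start nvmf_tgt inside the test namespace and wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # polling spdk_get_version approximates the waitforlisten helper (assumption)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
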
00:22:38.301 [2024-07-15 15:45:33.421788] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.301 [2024-07-15 15:45:33.422139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.301 [2024-07-15 15:45:33.422359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.301 [2024-07-15 15:45:33.422407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.301 [2024-07-15 15:45:33.422413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:22:38.560 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.560 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:38.560 [2024-07-15 15:45:33.541504] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.560 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:38.560 [2024-07-15 15:45:33.554793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.560 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:38.560 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:38.560 Nvme0n1 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.560 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.560 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.560 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.560 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:38.818 [2024-07-15 15:45:33.691914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.818 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.818 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:22:38.818 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.818 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:38.818 [ 00:22:38.818 { 00:22:38.818 "allow_any_host": true, 00:22:38.818 "hosts": [], 00:22:38.818 "listen_addresses": [], 00:22:38.818 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:38.818 "subtype": "Discovery" 00:22:38.818 }, 00:22:38.818 { 00:22:38.818 "allow_any_host": true, 00:22:38.818 "hosts": [], 00:22:38.818 "listen_addresses": [ 00:22:38.818 { 00:22:38.818 "adrfam": "IPv4", 00:22:38.818 "traddr": "10.0.0.2", 00:22:38.818 "trsvcid": "4420", 00:22:38.818 "trtype": "TCP" 00:22:38.818 } 00:22:38.818 ], 00:22:38.818 "max_cntlid": 65519, 00:22:38.818 "max_namespaces": 1, 00:22:38.818 "min_cntlid": 1, 00:22:38.818 "model_number": "SPDK bdev Controller", 00:22:38.818 "namespaces": [ 00:22:38.818 { 00:22:38.818 "bdev_name": "Nvme0n1", 00:22:38.818 "name": "Nvme0n1", 00:22:38.818 "nguid": "BCF14D1FF2A74CD5936637CBB87AA80B", 00:22:38.818 "nsid": 1, 00:22:38.818 "uuid": "bcf14d1f-f2a7-4cd5-9366-37cbb87aa80b" 00:22:38.818 } 00:22:38.818 ], 00:22:38.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.818 "serial_number": "SPDK00000000000001", 00:22:38.818 "subtype": "NVMe" 00:22:38.818 } 00:22:38.818 ] 00:22:38.818 15:45:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.818 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:38.818 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:22:38.818 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:22:38.818 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:22:38.818 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:38.818 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:22:38.818 15:45:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:22:39.076 15:45:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:22:39.076 15:45:34 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:22:39.076 15:45:34 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:22:39.076 15:45:34 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:39.076 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.076 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:39.076 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.076 15:45:34 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:22:39.076 15:45:34 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:22:39.076 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.076 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:22:39.076 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.076 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:22:39.076 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.076 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.076 rmmod nvme_tcp 00:22:39.076 rmmod nvme_fabrics 00:22:39.335 rmmod nvme_keyring 00:22:39.335 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.335 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:22:39.335 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:22:39.335 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 96841 ']' 00:22:39.335 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 96841 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 96841 ']' 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 96841 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96841 00:22:39.335 killing process with pid 96841 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96841' 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 96841 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 96841 00:22:39.335 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.335 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.335 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.335 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.335 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.335 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.335 15:45:34 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.335 15:45:34 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:39.335 ************************************ 00:22:39.335 END TEST nvmf_identify_passthru 00:22:39.335 ************************************ 00:22:39.335 00:22:39.335 real 0m2.178s 00:22:39.335 user 0m4.274s 00:22:39.335 sys 0m0.701s 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:39.335 15:45:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:39.593 15:45:34 -- common/autotest_common.sh@1142 -- # return 0 00:22:39.593 15:45:34 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:39.593 15:45:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:39.593 15:45:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.593 15:45:34 -- common/autotest_common.sh@10 -- # set +x 00:22:39.593 ************************************ 00:22:39.593 START TEST nvmf_dif 00:22:39.593 ************************************ 00:22:39.593 15:45:34 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:39.593 * Looking for test storage... 00:22:39.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:39.593 15:45:34 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:39.593 15:45:34 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.593 15:45:34 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.593 15:45:34 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.593 15:45:34 nvmf_dif -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.593 15:45:34 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.593 15:45:34 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.593 15:45:34 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:39.593 15:45:34 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:39.593 15:45:34 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:39.593 15:45:34 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:39.593 15:45:34 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:39.593 15:45:34 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:39.593 15:45:34 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.593 15:45:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:39.593 15:45:34 nvmf_dif -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:39.593 Cannot find device "nvmf_tgt_br" 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@155 -- # true 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:39.593 Cannot find device "nvmf_tgt_br2" 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@156 -- # true 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:39.593 Cannot find device "nvmf_tgt_br" 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@158 -- # true 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:39.593 Cannot find device "nvmf_tgt_br2" 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@159 -- # true 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:39.593 15:45:34 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:39.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:39.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@170 -- # ip link 
add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:39.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:22:39.851 00:22:39.851 --- 10.0.0.2 ping statistics --- 00:22:39.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.851 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:39.851 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:39.851 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:22:39.851 00:22:39.851 --- 10.0.0.3 ping statistics --- 00:22:39.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.851 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:39.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:39.851 00:22:39.851 --- 10.0.0.1 ping statistics --- 00:22:39.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.851 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:22:39.851 15:45:34 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:22:39.852 15:45:34 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:40.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:40.112 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:40.112 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:40.370 15:45:35 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.370 15:45:35 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:40.370 15:45:35 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:40.370 15:45:35 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.370 15:45:35 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:40.370 15:45:35 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:40.370 15:45:35 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:40.370 15:45:35 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:40.370 15:45:35 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:40.370 15:45:35 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:40.370 15:45:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:40.370 15:45:35 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97173 00:22:40.370 15:45:35 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:40.370 15:45:35 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97173 00:22:40.370 15:45:35 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 97173 ']' 00:22:40.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.370 15:45:35 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.370 15:45:35 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.370 15:45:35 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.370 15:45:35 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.370 15:45:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:40.370 [2024-07-15 15:45:35.356962] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:22:40.370 [2024-07-15 15:45:35.357045] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.370 [2024-07-15 15:45:35.493667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.628 [2024-07-15 15:45:35.563211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
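For reference, the dif test that starts here configures the target with the RPC sequence sketched below; the arguments are copied from the trace that follows, while the bare rpc.py invocation form is an assumption, since the script goes through its rpc_cmd wrapper.

    # tcp transport with DIF insert/strip, one null bdev with 16-byte metadata and DIF type 1
    # (rpc.py form assumed; arguments copied from the trace)
    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
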
00:22:40.628 [2024-07-15 15:45:35.563277] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.628 [2024-07-15 15:45:35.563292] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.628 [2024-07-15 15:45:35.563302] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.628 [2024-07-15 15:45:35.563311] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.628 [2024-07-15 15:45:35.563342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.628 15:45:35 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.628 15:45:35 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:22:40.629 15:45:35 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.629 15:45:35 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.629 15:45:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 15:45:35 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.629 15:45:35 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:40.629 15:45:35 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:40.629 15:45:35 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.629 15:45:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 [2024-07-15 15:45:35.703854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.629 15:45:35 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.629 15:45:35 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:40.629 15:45:35 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:40.629 15:45:35 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.629 15:45:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 ************************************ 00:22:40.629 START TEST fio_dif_1_default 00:22:40.629 ************************************ 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 bdev_null0 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.629 15:45:35 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.629 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 [2024-07-15 15:45:35.751974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:40.887 { 00:22:40.887 "params": { 00:22:40.887 "name": "Nvme$subsystem", 00:22:40.887 "trtype": "$TEST_TRANSPORT", 00:22:40.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.887 "adrfam": "ipv4", 00:22:40.887 "trsvcid": "$NVMF_PORT", 00:22:40.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.887 "hdgst": ${hdgst:-false}, 00:22:40.887 "ddgst": ${ddgst:-false} 00:22:40.887 }, 00:22:40.887 "method": "bdev_nvme_attach_controller" 00:22:40.887 } 00:22:40.887 EOF 00:22:40.887 )") 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:40.887 15:45:35 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:40.887 "params": { 00:22:40.887 "name": "Nvme0", 00:22:40.887 "trtype": "tcp", 00:22:40.887 "traddr": "10.0.0.2", 00:22:40.887 "adrfam": "ipv4", 00:22:40.887 "trsvcid": "4420", 00:22:40.887 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:40.887 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:40.887 "hdgst": false, 00:22:40.887 "ddgst": false 00:22:40.887 }, 00:22:40.887 "method": "bdev_nvme_attach_controller" 00:22:40.887 }' 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:40.887 15:45:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:40.887 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:40.887 fio-3.35 00:22:40.887 Starting 1 thread 00:22:53.088 00:22:53.088 filename0: (groupid=0, jobs=1): err= 0: pid=97244: Mon Jul 15 15:45:46 2024 00:22:53.088 read: IOPS=2733, BW=10.7MiB/s (11.2MB/s)(107MiB/10001msec) 00:22:53.089 slat (nsec): min=5834, max=42790, avg=7417.12, stdev=3018.31 00:22:53.089 clat (usec): min=337, max=41418, avg=1441.15, stdev=6388.35 00:22:53.089 lat (usec): min=343, max=41428, avg=1448.57, stdev=6388.42 00:22:53.089 clat percentiles (usec): 00:22:53.089 | 1.00th=[ 347], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 371], 00:22:53.089 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 
412], 00:22:53.089 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 465], 95.00th=[ 502], 00:22:53.089 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:22:53.089 | 99.99th=[41157] 00:22:53.089 bw ( KiB/s): min= 6656, max=15584, per=98.97%, avg=10821.05, stdev=2738.54, samples=19 00:22:53.089 iops : min= 1664, max= 3896, avg=2705.26, stdev=684.63, samples=19 00:22:53.089 lat (usec) : 500=94.86%, 750=2.56% 00:22:53.089 lat (msec) : 10=0.01%, 50=2.56% 00:22:53.089 cpu : usr=90.37%, sys=8.51%, ctx=11, majf=0, minf=9 00:22:53.089 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:53.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.089 issued rwts: total=27336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.089 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:53.089 00:22:53.089 Run status group 0 (all jobs): 00:22:53.089 READ: bw=10.7MiB/s (11.2MB/s), 10.7MiB/s-10.7MiB/s (11.2MB/s-11.2MB/s), io=107MiB (112MB), run=10001-10001msec 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.089 ************************************ 00:22:53.089 END TEST fio_dif_1_default 00:22:53.089 ************************************ 00:22:53.089 00:22:53.089 real 0m10.914s 00:22:53.089 user 0m9.654s 00:22:53.089 sys 0m1.073s 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:53.089 15:45:46 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:53.089 15:45:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:53.089 15:45:46 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:53.089 15:45:46 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.089 15:45:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:53.089 ************************************ 00:22:53.089 START TEST fio_dif_1_multi_subsystems 00:22:53.089 ************************************ 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:53.089 bdev_null0 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:53.089 [2024-07-15 15:45:46.720874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:53.089 bdev_null1 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.089 15:45:46 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.089 { 00:22:53.089 "params": { 00:22:53.089 "name": "Nvme$subsystem", 00:22:53.089 "trtype": "$TEST_TRANSPORT", 00:22:53.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.089 "adrfam": "ipv4", 00:22:53.089 "trsvcid": "$NVMF_PORT", 00:22:53.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.089 "hdgst": ${hdgst:-false}, 00:22:53.089 "ddgst": ${ddgst:-false} 00:22:53.089 }, 00:22:53.089 "method": "bdev_nvme_attach_controller" 00:22:53.089 } 00:22:53.089 EOF 00:22:53.089 )") 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:53.089 15:45:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:22:53.089 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.089 { 00:22:53.089 "params": { 00:22:53.089 "name": "Nvme$subsystem", 00:22:53.089 "trtype": "$TEST_TRANSPORT", 00:22:53.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.090 "adrfam": "ipv4", 00:22:53.090 "trsvcid": "$NVMF_PORT", 00:22:53.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.090 "hdgst": ${hdgst:-false}, 00:22:53.090 "ddgst": ${ddgst:-false} 00:22:53.090 }, 00:22:53.090 "method": "bdev_nvme_attach_controller" 00:22:53.090 } 00:22:53.090 EOF 00:22:53.090 )") 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
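gen_nvmf_target_json (nvmf/common.sh) expands the heredoc template above once per subsystem id passed to it (here 0 and 1), joins the fragments, and pipes the result through jq to produce the JSON the fio plugin reads from /dev/fd/62; the resolved attach-controller entries are echoed a few lines below. Written to an ordinary file, the equivalent configuration would look roughly as follows; the enclosing subsystems/bdev wrapper is the standard SPDK JSON-config layout and is assumed here, since the excerpt only shows the attach-controller fragments.

cat <<'JSON' > bdev.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } }
      ]
    }
  ]
}
JSON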
00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:53.090 "params": { 00:22:53.090 "name": "Nvme0", 00:22:53.090 "trtype": "tcp", 00:22:53.090 "traddr": "10.0.0.2", 00:22:53.090 "adrfam": "ipv4", 00:22:53.090 "trsvcid": "4420", 00:22:53.090 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:53.090 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:53.090 "hdgst": false, 00:22:53.090 "ddgst": false 00:22:53.090 }, 00:22:53.090 "method": "bdev_nvme_attach_controller" 00:22:53.090 },{ 00:22:53.090 "params": { 00:22:53.090 "name": "Nvme1", 00:22:53.090 "trtype": "tcp", 00:22:53.090 "traddr": "10.0.0.2", 00:22:53.090 "adrfam": "ipv4", 00:22:53.090 "trsvcid": "4420", 00:22:53.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.090 "hdgst": false, 00:22:53.090 "ddgst": false 00:22:53.090 }, 00:22:53.090 "method": "bdev_nvme_attach_controller" 00:22:53.090 }' 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:53.090 15:45:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:53.090 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:53.090 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:53.090 fio-3.35 00:22:53.090 Starting 2 threads 00:23:03.057 00:23:03.057 filename0: (groupid=0, jobs=1): err= 0: pid=97405: Mon Jul 15 15:45:57 2024 00:23:03.057 read: IOPS=169, BW=679KiB/s (695kB/s)(6800KiB/10014msec) 00:23:03.057 slat (nsec): min=6216, max=37050, avg=8329.79, stdev=3523.83 00:23:03.057 clat (usec): min=362, max=41490, avg=23535.74, stdev=19998.33 00:23:03.057 lat (usec): min=368, max=41502, avg=23544.07, stdev=19998.33 00:23:03.057 clat percentiles (usec): 00:23:03.057 | 1.00th=[ 379], 5.00th=[ 392], 10.00th=[ 404], 20.00th=[ 420], 00:23:03.057 | 30.00th=[ 441], 40.00th=[ 482], 50.00th=[40633], 60.00th=[40633], 00:23:03.057 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:23:03.057 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:23:03.057 | 99.99th=[41681] 00:23:03.057 bw ( KiB/s): min= 512, max= 928, per=48.08%, avg=678.40, stdev=120.90, samples=20 00:23:03.057 iops : 
min= 128, max= 232, avg=169.60, stdev=30.22, samples=20 00:23:03.057 lat (usec) : 500=41.00%, 750=1.18%, 1000=0.41% 00:23:03.057 lat (msec) : 2=0.24%, 50=57.18% 00:23:03.057 cpu : usr=94.37%, sys=5.25%, ctx=10, majf=0, minf=0 00:23:03.057 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:03.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.057 issued rwts: total=1700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.057 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:03.057 filename1: (groupid=0, jobs=1): err= 0: pid=97406: Mon Jul 15 15:45:57 2024 00:23:03.057 read: IOPS=183, BW=733KiB/s (751kB/s)(7360KiB/10041msec) 00:23:03.057 slat (nsec): min=6192, max=34552, avg=8298.90, stdev=3381.48 00:23:03.057 clat (usec): min=371, max=42469, avg=21801.88, stdev=20195.12 00:23:03.057 lat (usec): min=378, max=42492, avg=21810.18, stdev=20195.00 00:23:03.057 clat percentiles (usec): 00:23:03.057 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 412], 00:23:03.057 | 30.00th=[ 433], 40.00th=[ 469], 50.00th=[40633], 60.00th=[40633], 00:23:03.057 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:23:03.057 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42730], 00:23:03.057 | 99.99th=[42730] 00:23:03.057 bw ( KiB/s): min= 544, max= 960, per=52.05%, avg=734.40, stdev=133.96, samples=20 00:23:03.057 iops : min= 136, max= 240, avg=183.60, stdev=33.49, samples=20 00:23:03.057 lat (usec) : 500=43.48%, 750=3.10%, 1000=0.38% 00:23:03.057 lat (msec) : 2=0.22%, 50=52.83% 00:23:03.057 cpu : usr=95.58%, sys=4.03%, ctx=18, majf=0, minf=9 00:23:03.057 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:03.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.057 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.057 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:03.057 00:23:03.057 Run status group 0 (all jobs): 00:23:03.057 READ: bw=1410KiB/s (1444kB/s), 679KiB/s-733KiB/s (695kB/s-751kB/s), io=13.8MiB (14.5MB), run=10014-10041msec 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.057 15:45:57 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.057 00:23:03.057 real 0m11.105s 00:23:03.057 user 0m19.825s 00:23:03.057 sys 0m1.165s 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:03.057 15:45:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:03.057 ************************************ 00:23:03.057 END TEST fio_dif_1_multi_subsystems 00:23:03.057 ************************************ 00:23:03.057 15:45:57 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:03.057 15:45:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:03.057 15:45:57 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:03.057 15:45:57 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:03.057 15:45:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:03.057 ************************************ 00:23:03.057 START TEST fio_dif_rand_params 00:23:03.057 ************************************ 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
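create_subsystem 0 (target/dif.sh) drives the running nvmf_tgt over its RPC socket, and the four rpc_cmd calls it makes are visible in the trace that follows: create a null bdev carrying protection information, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. Issued by hand with scripts/rpc.py (which the autotest rpc_cmd helper wraps; the exact rpc.py path is assumed from the repo location used elsewhere in this run), the sequence for this test's DIF type 3 case is roughly:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 64 MiB null bdev, 512-byte blocks plus 16 bytes of metadata, protection information type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420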
00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:03.057 bdev_null0 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.057 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:03.058 [2024-07-15 15:45:57.885675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.058 { 00:23:03.058 "params": { 00:23:03.058 "name": "Nvme$subsystem", 00:23:03.058 "trtype": "$TEST_TRANSPORT", 00:23:03.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.058 "adrfam": "ipv4", 00:23:03.058 "trsvcid": "$NVMF_PORT", 00:23:03.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.058 "hdgst": ${hdgst:-false}, 00:23:03.058 "ddgst": ${ddgst:-false} 00:23:03.058 }, 00:23:03.058 "method": "bdev_nvme_attach_controller" 00:23:03.058 } 00:23:03.058 EOF 
00:23:03.058 )") 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
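fio_bdev, used throughout dif.sh, is stock fio launched with the SPDK bdev ioengine preloaded; the generated JSON device config arrives on /dev/fd/62 and the generated job file on /dev/fd/61, as the LD_PRELOAD and /usr/src/fio/fio lines just below show. Outside the harness, the same invocation with ordinary files in place of the two file descriptors (bdev.json and dif.fio are hypothetical names) is simply:

# Engine path and fio binary exactly as in the trace; bdev.json / dif.fio stand in
# for the /dev/fd/62 and /dev/fd/61 inputs the harness generates on the fly.
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
LD_PRELOAD=$PLUGIN /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio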
00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:03.058 "params": { 00:23:03.058 "name": "Nvme0", 00:23:03.058 "trtype": "tcp", 00:23:03.058 "traddr": "10.0.0.2", 00:23:03.058 "adrfam": "ipv4", 00:23:03.058 "trsvcid": "4420", 00:23:03.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:03.058 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:03.058 "hdgst": false, 00:23:03.058 "ddgst": false 00:23:03.058 }, 00:23:03.058 "method": "bdev_nvme_attach_controller" 00:23:03.058 }' 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:03.058 15:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:03.058 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:03.058 ... 
00:23:03.058 fio-3.35 00:23:03.058 Starting 3 threads 00:23:09.648 00:23:09.648 filename0: (groupid=0, jobs=1): err= 0: pid=97562: Mon Jul 15 15:46:03 2024 00:23:09.648 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(174MiB/5009msec) 00:23:09.648 slat (nsec): min=6612, max=47671, avg=11746.11, stdev=4223.60 00:23:09.648 clat (usec): min=6032, max=51862, avg=10751.51, stdev=4241.81 00:23:09.648 lat (usec): min=6042, max=51878, avg=10763.26, stdev=4241.85 00:23:09.648 clat percentiles (usec): 00:23:09.648 | 1.00th=[ 6915], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[ 9765], 00:23:09.648 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:23:09.648 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:23:09.648 | 99.00th=[49021], 99.50th=[50594], 99.90th=[51119], 99.95th=[51643], 00:23:09.648 | 99.99th=[51643] 00:23:09.648 bw ( KiB/s): min=29440, max=38144, per=37.24%, avg=35635.20, stdev=2573.62, samples=10 00:23:09.648 iops : min= 230, max= 298, avg=278.40, stdev=20.11, samples=10 00:23:09.648 lat (msec) : 10=30.39%, 20=68.53%, 50=0.36%, 100=0.72% 00:23:09.648 cpu : usr=92.45%, sys=6.05%, ctx=9, majf=0, minf=0 00:23:09.648 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:09.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.648 issued rwts: total=1395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.648 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:09.648 filename0: (groupid=0, jobs=1): err= 0: pid=97563: Mon Jul 15 15:46:03 2024 00:23:09.648 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(157MiB/5007msec) 00:23:09.648 slat (nsec): min=6499, max=46980, avg=10803.35, stdev=4335.32 00:23:09.648 clat (usec): min=5856, max=53243, avg=11976.94, stdev=4416.23 00:23:09.648 lat (usec): min=5863, max=53254, avg=11987.74, stdev=4416.43 00:23:09.648 clat percentiles (usec): 00:23:09.648 | 1.00th=[ 6915], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[10814], 00:23:09.648 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:23:09.648 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:23:09.648 | 99.00th=[47449], 99.50th=[51643], 99.90th=[52691], 99.95th=[53216], 00:23:09.648 | 99.99th=[53216] 00:23:09.648 bw ( KiB/s): min=27648, max=35328, per=33.41%, avg=31974.40, stdev=2310.15, samples=10 00:23:09.648 iops : min= 216, max= 276, avg=249.80, stdev=18.05, samples=10 00:23:09.648 lat (msec) : 10=6.15%, 20=92.65%, 50=0.48%, 100=0.72% 00:23:09.648 cpu : usr=91.79%, sys=6.83%, ctx=3, majf=0, minf=0 00:23:09.648 IO depths : 1=12.0%, 2=88.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:09.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.648 issued rwts: total=1252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.648 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:09.648 filename0: (groupid=0, jobs=1): err= 0: pid=97564: Mon Jul 15 15:46:03 2024 00:23:09.648 read: IOPS=219, BW=27.4MiB/s (28.7MB/s)(137MiB/5007msec) 00:23:09.648 slat (nsec): min=6447, max=42638, avg=9774.87, stdev=4425.41 00:23:09.648 clat (usec): min=3777, max=17435, avg=13655.70, stdev=2758.53 00:23:09.648 lat (usec): min=3784, max=17450, avg=13665.47, stdev=2758.88 00:23:09.648 clat percentiles (usec): 00:23:09.648 | 1.00th=[ 3851], 5.00th=[ 7767], 10.00th=[ 9372], 20.00th=[13304], 
00:23:09.649 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14484], 60.00th=[14615], 00:23:09.649 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15664], 95.00th=[16188], 00:23:09.649 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:23:09.649 | 99.99th=[17433] 00:23:09.649 bw ( KiB/s): min=25344, max=35328, per=29.29%, avg=28032.00, stdev=3182.03, samples=10 00:23:09.649 iops : min= 198, max= 276, avg=219.00, stdev=24.86, samples=10 00:23:09.649 lat (msec) : 4=3.01%, 10=8.47%, 20=88.52% 00:23:09.649 cpu : usr=93.47%, sys=5.17%, ctx=47, majf=0, minf=0 00:23:09.649 IO depths : 1=33.0%, 2=67.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:09.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.649 issued rwts: total=1098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.649 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:09.649 00:23:09.649 Run status group 0 (all jobs): 00:23:09.649 READ: bw=93.5MiB/s (98.0MB/s), 27.4MiB/s-34.8MiB/s (28.7MB/s-36.5MB/s), io=468MiB (491MB), run=5007-5009msec 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 bdev_null0 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 [2024-07-15 15:46:03.812863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 bdev_null1 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 bdev_null2 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.649 { 00:23:09.649 "params": { 00:23:09.649 "name": "Nvme$subsystem", 00:23:09.649 "trtype": "$TEST_TRANSPORT", 00:23:09.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.649 "adrfam": "ipv4", 00:23:09.649 "trsvcid": "$NVMF_PORT", 00:23:09.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.649 "hdgst": ${hdgst:-false}, 00:23:09.649 "ddgst": ${ddgst:-false} 00:23:09.649 }, 00:23:09.649 "method": "bdev_nvme_attach_controller" 00:23:09.649 } 00:23:09.649 EOF 00:23:09.649 )") 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:09.649 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.650 { 00:23:09.650 "params": { 00:23:09.650 "name": "Nvme$subsystem", 00:23:09.650 "trtype": "$TEST_TRANSPORT", 00:23:09.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.650 "adrfam": "ipv4", 00:23:09.650 "trsvcid": "$NVMF_PORT", 00:23:09.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.650 "hdgst": 
${hdgst:-false}, 00:23:09.650 "ddgst": ${ddgst:-false} 00:23:09.650 }, 00:23:09.650 "method": "bdev_nvme_attach_controller" 00:23:09.650 } 00:23:09.650 EOF 00:23:09.650 )") 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.650 { 00:23:09.650 "params": { 00:23:09.650 "name": "Nvme$subsystem", 00:23:09.650 "trtype": "$TEST_TRANSPORT", 00:23:09.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.650 "adrfam": "ipv4", 00:23:09.650 "trsvcid": "$NVMF_PORT", 00:23:09.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.650 "hdgst": ${hdgst:-false}, 00:23:09.650 "ddgst": ${ddgst:-false} 00:23:09.650 }, 00:23:09.650 "method": "bdev_nvme_attach_controller" 00:23:09.650 } 00:23:09.650 EOF 00:23:09.650 )") 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:09.650 "params": { 00:23:09.650 "name": "Nvme0", 00:23:09.650 "trtype": "tcp", 00:23:09.650 "traddr": "10.0.0.2", 00:23:09.650 "adrfam": "ipv4", 00:23:09.650 "trsvcid": "4420", 00:23:09.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:09.650 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:09.650 "hdgst": false, 00:23:09.650 "ddgst": false 00:23:09.650 }, 00:23:09.650 "method": "bdev_nvme_attach_controller" 00:23:09.650 },{ 00:23:09.650 "params": { 00:23:09.650 "name": "Nvme1", 00:23:09.650 "trtype": "tcp", 00:23:09.650 "traddr": "10.0.0.2", 00:23:09.650 "adrfam": "ipv4", 00:23:09.650 "trsvcid": "4420", 00:23:09.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.650 "hdgst": false, 00:23:09.650 "ddgst": false 00:23:09.650 }, 00:23:09.650 "method": "bdev_nvme_attach_controller" 00:23:09.650 },{ 00:23:09.650 "params": { 00:23:09.650 "name": "Nvme2", 00:23:09.650 "trtype": "tcp", 00:23:09.650 "traddr": "10.0.0.2", 00:23:09.650 "adrfam": "ipv4", 00:23:09.650 "trsvcid": "4420", 00:23:09.650 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:09.650 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:09.650 "hdgst": false, 00:23:09.650 "ddgst": false 00:23:09.650 }, 00:23:09.650 "method": "bdev_nvme_attach_controller" 00:23:09.650 }' 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 
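Note: each of the three null-bdev subsystems traced above is stood up with the same four-step RPC sequence. A condensed sketch, using only the arguments visible in this run (rpc_cmd is assumed here to be the test framework's wrapper around SPDK's rpc.py; only subsystem 2 is shown, subsystems 0 and 1 differ only in the index):

    # create a 64 MiB null bdev with 512-byte blocks, 16-byte metadata, DIF type 2
    rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
    # expose it over NVMe-oF TCP on 10.0.0.2:4420
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

The expanded JSON printed just above is what the fio spdk_bdev plugin then consumes via --spdk_json_conf to attach Nvme0/Nvme1/Nvme2 controllers to these subsystems over TCP.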
00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:09.650 15:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:09.650 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:09.650 ... 00:23:09.650 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:09.650 ... 00:23:09.650 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:09.650 ... 00:23:09.650 fio-3.35 00:23:09.650 Starting 24 threads 00:23:21.879 00:23:21.879 filename0: (groupid=0, jobs=1): err= 0: pid=97659: Mon Jul 15 15:46:14 2024 00:23:21.879 read: IOPS=196, BW=786KiB/s (805kB/s)(7864KiB/10008msec) 00:23:21.879 slat (usec): min=4, max=8023, avg=16.87, stdev=202.14 00:23:21.879 clat (msec): min=35, max=174, avg=81.31, stdev=20.78 00:23:21.879 lat (msec): min=35, max=174, avg=81.32, stdev=20.78 00:23:21.879 clat percentiles (msec): 00:23:21.879 | 1.00th=[ 44], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 68], 00:23:21.879 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:23:21.879 | 70.00th=[ 91], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:23:21.879 | 99.00th=[ 138], 99.50th=[ 150], 99.90th=[ 176], 99.95th=[ 176], 00:23:21.879 | 99.99th=[ 176] 00:23:21.879 bw ( KiB/s): min= 600, max= 992, per=3.61%, avg=776.84, stdev=89.80, samples=19 00:23:21.879 iops : min= 150, max= 248, avg=194.21, stdev=22.45, samples=19 00:23:21.879 lat (msec) : 50=5.34%, 100=79.76%, 250=14.90% 00:23:21.879 cpu : usr=38.57%, sys=1.14%, ctx=1145, majf=0, minf=9 00:23:21.879 IO depths : 1=3.0%, 2=6.5%, 4=16.9%, 8=63.8%, 16=9.8%, 32=0.0%, >=64=0.0% 00:23:21.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.879 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.879 issued rwts: total=1966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.879 filename0: (groupid=0, jobs=1): err= 0: pid=97660: Mon Jul 15 15:46:14 2024 00:23:21.879 read: IOPS=200, BW=803KiB/s (822kB/s)(8036KiB/10011msec) 00:23:21.879 slat (usec): min=4, max=6020, avg=23.11, stdev=227.86 00:23:21.879 clat (msec): min=24, max=143, avg=79.57, stdev=19.44 00:23:21.879 lat (msec): min=24, max=143, avg=79.59, stdev=19.44 00:23:21.879 clat percentiles (msec): 00:23:21.879 | 1.00th=[ 43], 5.00th=[ 51], 10.00th=[ 57], 20.00th=[ 65], 00:23:21.879 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 81], 00:23:21.879 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:23:21.879 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:23:21.879 | 99.99th=[ 144] 00:23:21.879 bw ( KiB/s): min= 624, max= 944, per=3.70%, avg=797.47, stdev=112.89, samples=19 00:23:21.879 iops : min= 156, max= 236, avg=199.37, stdev=28.22, samples=19 00:23:21.879 lat (msec) : 50=4.93%, 100=79.19%, 250=15.88% 00:23:21.879 cpu : usr=42.69%, sys=1.32%, ctx=1254, majf=0, minf=9 00:23:21.879 IO depths : 1=3.4%, 2=7.2%, 4=17.3%, 8=62.6%, 16=9.5%, 32=0.0%, >=64=0.0% 00:23:21.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 
issued rwts: total=2009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.880 filename0: (groupid=0, jobs=1): err= 0: pid=97661: Mon Jul 15 15:46:14 2024 00:23:21.880 read: IOPS=206, BW=827KiB/s (847kB/s)(8284KiB/10018msec) 00:23:21.880 slat (nsec): min=4647, max=41571, avg=10800.90, stdev=3887.66 00:23:21.880 clat (msec): min=31, max=145, avg=77.28, stdev=19.35 00:23:21.880 lat (msec): min=31, max=145, avg=77.29, stdev=19.35 00:23:21.880 clat percentiles (msec): 00:23:21.880 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 64], 00:23:21.880 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:23:21.880 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 111], 00:23:21.880 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 146], 00:23:21.880 | 99.99th=[ 146] 00:23:21.880 bw ( KiB/s): min= 720, max= 1104, per=3.84%, avg=827.00, stdev=100.41, samples=19 00:23:21.880 iops : min= 180, max= 276, avg=206.74, stdev=25.11, samples=19 00:23:21.880 lat (msec) : 50=9.71%, 100=78.90%, 250=11.40% 00:23:21.880 cpu : usr=34.98%, sys=1.11%, ctx=1028, majf=0, minf=9 00:23:21.880 IO depths : 1=2.5%, 2=5.6%, 4=15.1%, 8=66.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:23:21.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 issued rwts: total=2071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.880 filename0: (groupid=0, jobs=1): err= 0: pid=97662: Mon Jul 15 15:46:14 2024 00:23:21.880 read: IOPS=202, BW=812KiB/s (831kB/s)(8124KiB/10009msec) 00:23:21.880 slat (usec): min=4, max=8019, avg=18.77, stdev=251.23 00:23:21.880 clat (msec): min=9, max=167, avg=78.73, stdev=20.13 00:23:21.880 lat (msec): min=9, max=167, avg=78.75, stdev=20.14 00:23:21.880 clat percentiles (msec): 00:23:21.880 | 1.00th=[ 36], 5.00th=[ 49], 10.00th=[ 59], 20.00th=[ 64], 00:23:21.880 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:23:21.880 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 121], 00:23:21.880 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 167], 00:23:21.880 | 99.99th=[ 167] 00:23:21.880 bw ( KiB/s): min= 640, max= 944, per=3.69%, avg=794.53, stdev=87.88, samples=19 00:23:21.880 iops : min= 160, max= 236, avg=198.63, stdev=21.97, samples=19 00:23:21.880 lat (msec) : 10=0.25%, 20=0.54%, 50=5.37%, 100=82.77%, 250=11.08% 00:23:21.880 cpu : usr=32.17%, sys=1.05%, ctx=1034, majf=0, minf=9 00:23:21.880 IO depths : 1=1.2%, 2=2.9%, 4=11.7%, 8=71.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:23:21.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 complete : 0=0.0%, 4=90.3%, 8=5.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 issued rwts: total=2031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.880 filename0: (groupid=0, jobs=1): err= 0: pid=97663: Mon Jul 15 15:46:14 2024 00:23:21.880 read: IOPS=252, BW=1008KiB/s (1033kB/s)(9.88MiB/10028msec) 00:23:21.880 slat (usec): min=4, max=8019, avg=20.01, stdev=251.80 00:23:21.880 clat (msec): min=29, max=147, avg=63.32, stdev=17.89 00:23:21.880 lat (msec): min=29, max=147, avg=63.34, stdev=17.89 00:23:21.880 clat percentiles (msec): 00:23:21.880 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 48], 00:23:21.880 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 61], 
60.00th=[ 68], 00:23:21.880 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 87], 95.00th=[ 96], 00:23:21.880 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 148], 99.95th=[ 148], 00:23:21.880 | 99.99th=[ 148] 00:23:21.880 bw ( KiB/s): min= 816, max= 1232, per=4.67%, avg=1004.85, stdev=139.57, samples=20 00:23:21.880 iops : min= 204, max= 308, avg=251.20, stdev=34.90, samples=20 00:23:21.880 lat (msec) : 50=32.08%, 100=65.39%, 250=2.53% 00:23:21.880 cpu : usr=37.91%, sys=1.18%, ctx=1036, majf=0, minf=9 00:23:21.880 IO depths : 1=0.7%, 2=1.6%, 4=8.2%, 8=76.7%, 16=12.7%, 32=0.0%, >=64=0.0% 00:23:21.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 issued rwts: total=2528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.880 filename0: (groupid=0, jobs=1): err= 0: pid=97664: Mon Jul 15 15:46:14 2024 00:23:21.880 read: IOPS=198, BW=794KiB/s (813kB/s)(7944KiB/10006msec) 00:23:21.880 slat (usec): min=4, max=8023, avg=14.35, stdev=179.84 00:23:21.880 clat (msec): min=35, max=144, avg=80.51, stdev=19.29 00:23:21.880 lat (msec): min=35, max=144, avg=80.52, stdev=19.29 00:23:21.880 clat percentiles (msec): 00:23:21.880 | 1.00th=[ 41], 5.00th=[ 49], 10.00th=[ 60], 20.00th=[ 68], 00:23:21.880 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 84], 00:23:21.880 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:23:21.880 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:23:21.880 | 99.99th=[ 144] 00:23:21.880 bw ( KiB/s): min= 640, max= 928, per=3.65%, avg=786.95, stdev=79.38, samples=19 00:23:21.880 iops : min= 160, max= 232, avg=196.74, stdev=19.85, samples=19 00:23:21.880 lat (msec) : 50=7.25%, 100=76.28%, 250=16.47% 00:23:21.880 cpu : usr=32.14%, sys=1.05%, ctx=911, majf=0, minf=9 00:23:21.880 IO depths : 1=2.5%, 2=5.3%, 4=14.6%, 8=67.1%, 16=10.6%, 32=0.0%, >=64=0.0% 00:23:21.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 complete : 0=0.0%, 4=91.1%, 8=3.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.880 filename0: (groupid=0, jobs=1): err= 0: pid=97665: Mon Jul 15 15:46:14 2024 00:23:21.880 read: IOPS=227, BW=910KiB/s (932kB/s)(9140KiB/10046msec) 00:23:21.880 slat (usec): min=4, max=8024, avg=23.06, stdev=302.05 00:23:21.880 clat (msec): min=10, max=131, avg=69.99, stdev=19.99 00:23:21.880 lat (msec): min=10, max=131, avg=70.01, stdev=19.99 00:23:21.880 clat percentiles (msec): 00:23:21.880 | 1.00th=[ 18], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 53], 00:23:21.880 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:23:21.880 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 108], 00:23:21.880 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 132], 99.95th=[ 132], 00:23:21.880 | 99.99th=[ 132] 00:23:21.880 bw ( KiB/s): min= 638, max= 1200, per=4.22%, avg=907.50, stdev=132.88, samples=20 00:23:21.880 iops : min= 159, max= 300, avg=226.85, stdev=33.27, samples=20 00:23:21.880 lat (msec) : 20=1.40%, 50=16.98%, 100=74.27%, 250=7.35% 00:23:21.880 cpu : usr=38.68%, sys=1.27%, ctx=1205, majf=0, minf=9 00:23:21.880 IO depths : 1=1.6%, 2=3.6%, 4=12.0%, 8=71.4%, 16=11.4%, 32=0.0%, >=64=0.0% 00:23:21.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 
complete : 0=0.0%, 4=90.4%, 8=4.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 issued rwts: total=2285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.880 filename0: (groupid=0, jobs=1): err= 0: pid=97666: Mon Jul 15 15:46:14 2024 00:23:21.880 read: IOPS=241, BW=967KiB/s (990kB/s)(9704KiB/10039msec) 00:23:21.880 slat (usec): min=4, max=4020, avg=13.60, stdev=110.29 00:23:21.880 clat (msec): min=31, max=147, avg=66.03, stdev=21.15 00:23:21.880 lat (msec): min=31, max=147, avg=66.04, stdev=21.15 00:23:21.880 clat percentiles (msec): 00:23:21.880 | 1.00th=[ 37], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 47], 00:23:21.880 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 63], 60.00th=[ 69], 00:23:21.880 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 99], 95.00th=[ 107], 00:23:21.880 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:23:21.880 | 99.99th=[ 148] 00:23:21.880 bw ( KiB/s): min= 552, max= 1280, per=4.48%, avg=963.45, stdev=187.54, samples=20 00:23:21.880 iops : min= 138, max= 320, avg=240.85, stdev=46.88, samples=20 00:23:21.880 lat (msec) : 50=25.19%, 100=66.41%, 250=8.41% 00:23:21.880 cpu : usr=44.68%, sys=1.45%, ctx=1448, majf=0, minf=9 00:23:21.880 IO depths : 1=1.7%, 2=3.6%, 4=12.0%, 8=71.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:23:21.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 complete : 0=0.0%, 4=90.3%, 8=4.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 issued rwts: total=2426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.880 filename1: (groupid=0, jobs=1): err= 0: pid=97667: Mon Jul 15 15:46:14 2024 00:23:21.880 read: IOPS=254, BW=1017KiB/s (1041kB/s)(9.98MiB/10045msec) 00:23:21.880 slat (usec): min=6, max=8028, avg=23.07, stdev=296.52 00:23:21.880 clat (msec): min=2, max=127, avg=62.68, stdev=21.10 00:23:21.880 lat (msec): min=2, max=127, avg=62.70, stdev=21.11 00:23:21.880 clat percentiles (msec): 00:23:21.880 | 1.00th=[ 4], 5.00th=[ 33], 10.00th=[ 42], 20.00th=[ 48], 00:23:21.880 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 71], 00:23:21.880 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 101], 00:23:21.880 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 128], 99.95th=[ 128], 00:23:21.880 | 99.99th=[ 129] 00:23:21.880 bw ( KiB/s): min= 766, max= 1920, per=4.71%, avg=1014.80, stdev=246.10, samples=20 00:23:21.880 iops : min= 191, max= 480, avg=253.65, stdev=61.56, samples=20 00:23:21.880 lat (msec) : 4=1.25%, 10=1.10%, 20=1.41%, 50=27.06%, 100=64.64% 00:23:21.880 lat (msec) : 250=4.54% 00:23:21.880 cpu : usr=39.49%, sys=1.37%, ctx=1363, majf=0, minf=9 00:23:21.880 IO depths : 1=2.6%, 2=5.5%, 4=13.9%, 8=67.2%, 16=10.8%, 32=0.0%, >=64=0.0% 00:23:21.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 complete : 0=0.0%, 4=91.2%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.880 issued rwts: total=2554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.880 filename1: (groupid=0, jobs=1): err= 0: pid=97668: Mon Jul 15 15:46:14 2024 00:23:21.880 read: IOPS=247, BW=989KiB/s (1012kB/s)(9932KiB/10047msec) 00:23:21.880 slat (usec): min=6, max=6030, avg=14.31, stdev=145.17 00:23:21.880 clat (msec): min=8, max=123, avg=64.52, stdev=19.69 00:23:21.880 lat (msec): min=8, max=124, avg=64.54, stdev=19.69 00:23:21.880 clat percentiles (msec): 00:23:21.880 | 1.00th=[ 13], 
5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 48], 00:23:21.880 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 63], 60.00th=[ 70], 00:23:21.880 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 102], 00:23:21.880 | 99.00th=[ 110], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 125], 00:23:21.880 | 99.99th=[ 125] 00:23:21.880 bw ( KiB/s): min= 768, max= 1352, per=4.58%, avg=986.70, stdev=154.01, samples=20 00:23:21.880 iops : min= 192, max= 338, avg=246.65, stdev=38.51, samples=20 00:23:21.880 lat (msec) : 10=0.64%, 20=0.93%, 50=24.81%, 100=68.30%, 250=5.32% 00:23:21.880 cpu : usr=42.38%, sys=1.30%, ctx=1288, majf=0, minf=9 00:23:21.880 IO depths : 1=0.8%, 2=1.7%, 4=8.0%, 8=76.5%, 16=13.0%, 32=0.0%, >=64=0.0% 00:23:21.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 issued rwts: total=2483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.881 filename1: (groupid=0, jobs=1): err= 0: pid=97669: Mon Jul 15 15:46:14 2024 00:23:21.881 read: IOPS=239, BW=957KiB/s (980kB/s)(9600KiB/10028msec) 00:23:21.881 slat (usec): min=4, max=8029, avg=22.26, stdev=291.84 00:23:21.881 clat (msec): min=9, max=155, avg=66.68, stdev=21.27 00:23:21.881 lat (msec): min=9, max=155, avg=66.70, stdev=21.26 00:23:21.881 clat percentiles (msec): 00:23:21.881 | 1.00th=[ 15], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:23:21.881 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 66], 60.00th=[ 72], 00:23:21.881 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:23:21.881 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 157], 99.95th=[ 157], 00:23:21.881 | 99.99th=[ 157] 00:23:21.881 bw ( KiB/s): min= 638, max= 1282, per=4.44%, avg=956.00, stdev=163.18, samples=20 00:23:21.881 iops : min= 159, max= 320, avg=238.95, stdev=40.79, samples=20 00:23:21.881 lat (msec) : 10=0.67%, 20=0.67%, 50=24.13%, 100=68.33%, 250=6.21% 00:23:21.881 cpu : usr=32.48%, sys=0.86%, ctx=1013, majf=0, minf=9 00:23:21.881 IO depths : 1=1.2%, 2=2.8%, 4=10.1%, 8=73.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:23:21.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 complete : 0=0.0%, 4=90.3%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 issued rwts: total=2400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.881 filename1: (groupid=0, jobs=1): err= 0: pid=97670: Mon Jul 15 15:46:14 2024 00:23:21.881 read: IOPS=210, BW=843KiB/s (864kB/s)(8440KiB/10007msec) 00:23:21.881 slat (nsec): min=4677, max=34903, avg=10958.78, stdev=3811.68 00:23:21.881 clat (msec): min=9, max=166, avg=75.80, stdev=20.90 00:23:21.881 lat (msec): min=9, max=166, avg=75.81, stdev=20.90 00:23:21.881 clat percentiles (msec): 00:23:21.881 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 59], 00:23:21.881 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:23:21.881 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 110], 00:23:21.881 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 167], 99.95th=[ 167], 00:23:21.881 | 99.99th=[ 167] 00:23:21.881 bw ( KiB/s): min= 640, max= 1248, per=3.90%, avg=838.74, stdev=138.93, samples=19 00:23:21.881 iops : min= 160, max= 312, avg=209.68, stdev=34.73, samples=19 00:23:21.881 lat (msec) : 10=0.28%, 50=10.43%, 100=77.68%, 250=11.61% 00:23:21.881 cpu : usr=39.71%, sys=1.35%, ctx=1134, majf=0, minf=9 00:23:21.881 IO depths : 1=1.7%, 2=3.8%, 
4=12.7%, 8=70.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:23:21.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 issued rwts: total=2110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.881 filename1: (groupid=0, jobs=1): err= 0: pid=97671: Mon Jul 15 15:46:14 2024 00:23:21.881 read: IOPS=222, BW=890KiB/s (912kB/s)(8928KiB/10027msec) 00:23:21.881 slat (usec): min=4, max=8017, avg=14.07, stdev=169.51 00:23:21.881 clat (msec): min=34, max=160, avg=71.80, stdev=20.95 00:23:21.881 lat (msec): min=34, max=160, avg=71.81, stdev=20.95 00:23:21.881 clat percentiles (msec): 00:23:21.881 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 52], 00:23:21.881 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:23:21.881 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 109], 00:23:21.881 | 99.00th=[ 138], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 161], 00:23:21.881 | 99.99th=[ 161] 00:23:21.881 bw ( KiB/s): min= 640, max= 1138, per=4.12%, avg=886.80, stdev=160.63, samples=20 00:23:21.881 iops : min= 160, max= 284, avg=221.60, stdev=40.12, samples=20 00:23:21.881 lat (msec) : 50=16.94%, 100=75.76%, 250=7.30% 00:23:21.881 cpu : usr=32.29%, sys=0.94%, ctx=923, majf=0, minf=9 00:23:21.881 IO depths : 1=1.0%, 2=2.3%, 4=10.1%, 8=74.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:23:21.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.881 filename1: (groupid=0, jobs=1): err= 0: pid=97672: Mon Jul 15 15:46:14 2024 00:23:21.881 read: IOPS=227, BW=908KiB/s (930kB/s)(9104KiB/10026msec) 00:23:21.881 slat (usec): min=3, max=4017, avg=12.86, stdev=84.08 00:23:21.881 clat (msec): min=33, max=137, avg=70.30, stdev=19.87 00:23:21.881 lat (msec): min=33, max=137, avg=70.32, stdev=19.87 00:23:21.881 clat percentiles (msec): 00:23:21.881 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 51], 00:23:21.881 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:23:21.881 | 70.00th=[ 80], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 107], 00:23:21.881 | 99.00th=[ 128], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 138], 00:23:21.881 | 99.99th=[ 138] 00:23:21.881 bw ( KiB/s): min= 600, max= 1088, per=4.22%, avg=907.55, stdev=147.88, samples=20 00:23:21.881 iops : min= 150, max= 272, avg=226.85, stdev=36.93, samples=20 00:23:21.881 lat (msec) : 50=18.32%, 100=73.59%, 250=8.08% 00:23:21.881 cpu : usr=41.25%, sys=1.21%, ctx=1231, majf=0, minf=9 00:23:21.881 IO depths : 1=1.7%, 2=3.7%, 4=11.8%, 8=71.4%, 16=11.4%, 32=0.0%, >=64=0.0% 00:23:21.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 complete : 0=0.0%, 4=90.4%, 8=4.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 issued rwts: total=2276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.881 filename1: (groupid=0, jobs=1): err= 0: pid=97673: Mon Jul 15 15:46:14 2024 00:23:21.881 read: IOPS=207, BW=831KiB/s (851kB/s)(8328KiB/10025msec) 00:23:21.881 slat (usec): min=4, max=8019, avg=16.60, stdev=196.28 00:23:21.881 clat (msec): min=32, max=169, avg=76.90, stdev=22.44 00:23:21.881 lat (msec): min=32, max=169, 
avg=76.92, stdev=22.44 00:23:21.881 clat percentiles (msec): 00:23:21.881 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 51], 20.00th=[ 61], 00:23:21.881 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 81], 00:23:21.881 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 121], 00:23:21.881 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 169], 99.95th=[ 169], 00:23:21.881 | 99.99th=[ 169] 00:23:21.881 bw ( KiB/s): min= 512, max= 1104, per=3.83%, avg=824.42, stdev=149.58, samples=19 00:23:21.881 iops : min= 128, max= 276, avg=206.21, stdev=37.55, samples=19 00:23:21.881 lat (msec) : 50=10.37%, 100=76.99%, 250=12.63% 00:23:21.881 cpu : usr=40.61%, sys=1.38%, ctx=1225, majf=0, minf=9 00:23:21.881 IO depths : 1=1.7%, 2=3.8%, 4=11.8%, 8=70.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:23:21.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 complete : 0=0.0%, 4=90.6%, 8=5.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 issued rwts: total=2082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.881 filename1: (groupid=0, jobs=1): err= 0: pid=97674: Mon Jul 15 15:46:14 2024 00:23:21.881 read: IOPS=200, BW=804KiB/s (823kB/s)(8040KiB/10002msec) 00:23:21.881 slat (usec): min=4, max=4022, avg=17.07, stdev=154.91 00:23:21.881 clat (msec): min=4, max=157, avg=79.49, stdev=20.26 00:23:21.881 lat (msec): min=4, max=157, avg=79.50, stdev=20.26 00:23:21.881 clat percentiles (msec): 00:23:21.881 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 68], 00:23:21.881 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 81], 00:23:21.881 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 110], 00:23:21.881 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 159], 00:23:21.881 | 99.99th=[ 159] 00:23:21.881 bw ( KiB/s): min= 563, max= 936, per=3.70%, avg=797.63, stdev=108.29, samples=19 00:23:21.881 iops : min= 140, max= 234, avg=199.37, stdev=27.16, samples=19 00:23:21.881 lat (msec) : 10=0.80%, 50=6.22%, 100=78.01%, 250=14.98% 00:23:21.881 cpu : usr=40.25%, sys=1.18%, ctx=1189, majf=0, minf=9 00:23:21.881 IO depths : 1=3.6%, 2=7.5%, 4=17.4%, 8=62.5%, 16=9.0%, 32=0.0%, >=64=0.0% 00:23:21.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 complete : 0=0.0%, 4=92.1%, 8=2.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.881 filename2: (groupid=0, jobs=1): err= 0: pid=97675: Mon Jul 15 15:46:14 2024 00:23:21.881 read: IOPS=213, BW=853KiB/s (873kB/s)(8548KiB/10021msec) 00:23:21.881 slat (usec): min=6, max=8024, avg=14.84, stdev=173.38 00:23:21.881 clat (msec): min=32, max=176, avg=74.92, stdev=21.93 00:23:21.881 lat (msec): min=32, max=176, avg=74.94, stdev=21.94 00:23:21.881 clat percentiles (msec): 00:23:21.881 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:23:21.881 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 79], 00:23:21.881 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 114], 00:23:21.881 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 178], 99.95th=[ 178], 00:23:21.881 | 99.99th=[ 178] 00:23:21.881 bw ( KiB/s): min= 592, max= 1120, per=3.94%, avg=848.45, stdev=129.05, samples=20 00:23:21.881 iops : min= 148, max= 280, avg=212.10, stdev=32.27, samples=20 00:23:21.881 lat (msec) : 50=16.14%, 100=71.27%, 250=12.59% 00:23:21.881 cpu : usr=36.21%, sys=1.02%, ctx=1002, majf=0, 
minf=9 00:23:21.881 IO depths : 1=2.1%, 2=4.5%, 4=13.5%, 8=68.7%, 16=11.1%, 32=0.0%, >=64=0.0% 00:23:21.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 issued rwts: total=2137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.881 filename2: (groupid=0, jobs=1): err= 0: pid=97676: Mon Jul 15 15:46:14 2024 00:23:21.881 read: IOPS=214, BW=856KiB/s (877kB/s)(8584KiB/10025msec) 00:23:21.881 slat (usec): min=4, max=11019, avg=19.58, stdev=293.84 00:23:21.881 clat (msec): min=31, max=178, avg=74.53, stdev=20.67 00:23:21.881 lat (msec): min=31, max=178, avg=74.55, stdev=20.69 00:23:21.881 clat percentiles (msec): 00:23:21.881 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:23:21.881 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:23:21.881 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 109], 00:23:21.881 | 99.00th=[ 128], 99.50th=[ 138], 99.90th=[ 178], 99.95th=[ 178], 00:23:21.881 | 99.99th=[ 178] 00:23:21.881 bw ( KiB/s): min= 640, max= 1120, per=3.97%, avg=855.65, stdev=119.05, samples=20 00:23:21.881 iops : min= 160, max= 280, avg=213.90, stdev=29.76, samples=20 00:23:21.881 lat (msec) : 50=14.49%, 100=73.49%, 250=12.02% 00:23:21.881 cpu : usr=32.18%, sys=0.98%, ctx=941, majf=0, minf=9 00:23:21.881 IO depths : 1=1.1%, 2=2.5%, 4=10.8%, 8=73.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:23:21.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.881 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.882 filename2: (groupid=0, jobs=1): err= 0: pid=97677: Mon Jul 15 15:46:14 2024 00:23:21.882 read: IOPS=239, BW=957KiB/s (980kB/s)(9620KiB/10053msec) 00:23:21.882 slat (usec): min=4, max=8021, avg=17.69, stdev=230.94 00:23:21.882 clat (msec): min=3, max=155, avg=66.68, stdev=21.28 00:23:21.882 lat (msec): min=3, max=155, avg=66.70, stdev=21.28 00:23:21.882 clat percentiles (msec): 00:23:21.882 | 1.00th=[ 8], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 48], 00:23:21.882 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:23:21.882 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 105], 00:23:21.882 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 144], 99.95th=[ 144], 00:23:21.882 | 99.99th=[ 157] 00:23:21.882 bw ( KiB/s): min= 736, max= 1536, per=4.44%, avg=955.50, stdev=181.93, samples=20 00:23:21.882 iops : min= 184, max= 384, avg=238.85, stdev=45.50, samples=20 00:23:21.882 lat (msec) : 4=0.29%, 10=1.33%, 20=0.79%, 50=23.53%, 100=68.11% 00:23:21.882 lat (msec) : 250=5.95% 00:23:21.882 cpu : usr=38.12%, sys=1.15%, ctx=1158, majf=0, minf=9 00:23:21.882 IO depths : 1=1.2%, 2=3.0%, 4=11.8%, 8=71.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:23:21.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 issued rwts: total=2405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.882 filename2: (groupid=0, jobs=1): err= 0: pid=97678: Mon Jul 15 15:46:14 2024 00:23:21.882 read: IOPS=251, BW=1004KiB/s (1028kB/s)(9.83MiB/10025msec) 00:23:21.882 slat (usec): min=3, max=4024, avg=13.24, stdev=100.16 
00:23:21.882 clat (msec): min=26, max=147, avg=63.59, stdev=20.39 00:23:21.882 lat (msec): min=26, max=147, avg=63.60, stdev=20.39 00:23:21.882 clat percentiles (msec): 00:23:21.882 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 47], 00:23:21.882 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 68], 00:23:21.882 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 105], 00:23:21.882 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 148], 00:23:21.882 | 99.99th=[ 148] 00:23:21.882 bw ( KiB/s): min= 656, max= 1248, per=4.66%, avg=1002.05, stdev=143.98, samples=20 00:23:21.882 iops : min= 164, max= 312, avg=250.50, stdev=35.99, samples=20 00:23:21.882 lat (msec) : 50=36.39%, 100=56.69%, 250=6.91% 00:23:21.882 cpu : usr=41.62%, sys=1.17%, ctx=1144, majf=0, minf=9 00:23:21.882 IO depths : 1=1.3%, 2=2.9%, 4=11.2%, 8=72.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:23:21.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 complete : 0=0.0%, 4=90.3%, 8=4.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 issued rwts: total=2517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.882 filename2: (groupid=0, jobs=1): err= 0: pid=97679: Mon Jul 15 15:46:14 2024 00:23:21.882 read: IOPS=241, BW=964KiB/s (988kB/s)(9668KiB/10024msec) 00:23:21.882 slat (usec): min=3, max=8023, avg=13.68, stdev=163.02 00:23:21.882 clat (msec): min=32, max=167, avg=66.27, stdev=20.78 00:23:21.882 lat (msec): min=32, max=167, avg=66.28, stdev=20.78 00:23:21.882 clat percentiles (msec): 00:23:21.882 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 48], 00:23:21.882 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 72], 00:23:21.882 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:23:21.882 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 169], 99.95th=[ 169], 00:23:21.882 | 99.99th=[ 169] 00:23:21.882 bw ( KiB/s): min= 697, max= 1232, per=4.46%, avg=960.50, stdev=144.04, samples=20 00:23:21.882 iops : min= 174, max= 308, avg=240.00, stdev=36.01, samples=20 00:23:21.882 lat (msec) : 50=28.38%, 100=65.78%, 250=5.83% 00:23:21.882 cpu : usr=35.41%, sys=0.95%, ctx=1014, majf=0, minf=9 00:23:21.882 IO depths : 1=0.9%, 2=1.9%, 4=8.8%, 8=75.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:23:21.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 issued rwts: total=2417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.882 filename2: (groupid=0, jobs=1): err= 0: pid=97680: Mon Jul 15 15:46:14 2024 00:23:21.882 read: IOPS=250, BW=1003KiB/s (1027kB/s)(9.80MiB/10010msec) 00:23:21.882 slat (usec): min=3, max=8029, avg=18.12, stdev=234.17 00:23:21.882 clat (msec): min=17, max=143, avg=63.72, stdev=19.48 00:23:21.882 lat (msec): min=17, max=143, avg=63.74, stdev=19.48 00:23:21.882 clat percentiles (msec): 00:23:21.882 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 48], 00:23:21.882 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 67], 00:23:21.882 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 93], 95.00th=[ 103], 00:23:21.882 | 99.00th=[ 121], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 144], 00:23:21.882 | 99.99th=[ 144] 00:23:21.882 bw ( KiB/s): min= 688, max= 1248, per=4.63%, avg=997.10, stdev=139.13, samples=20 00:23:21.882 iops : min= 172, max= 312, avg=249.25, stdev=34.81, samples=20 00:23:21.882 lat 
(msec) : 20=0.64%, 50=31.17%, 100=62.81%, 250=5.38% 00:23:21.882 cpu : usr=36.92%, sys=1.16%, ctx=1289, majf=0, minf=9 00:23:21.882 IO depths : 1=0.4%, 2=0.8%, 4=6.1%, 8=78.7%, 16=14.0%, 32=0.0%, >=64=0.0% 00:23:21.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 complete : 0=0.0%, 4=89.1%, 8=7.2%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 issued rwts: total=2509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.882 filename2: (groupid=0, jobs=1): err= 0: pid=97681: Mon Jul 15 15:46:14 2024 00:23:21.882 read: IOPS=224, BW=897KiB/s (919kB/s)(9004KiB/10033msec) 00:23:21.882 slat (usec): min=5, max=4024, avg=14.50, stdev=119.60 00:23:21.882 clat (msec): min=35, max=142, avg=71.23, stdev=19.48 00:23:21.882 lat (msec): min=35, max=142, avg=71.24, stdev=19.48 00:23:21.882 clat percentiles (msec): 00:23:21.882 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 46], 20.00th=[ 53], 00:23:21.882 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 73], 00:23:21.882 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 103], 00:23:21.882 | 99.00th=[ 126], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:23:21.882 | 99.99th=[ 144] 00:23:21.882 bw ( KiB/s): min= 640, max= 1256, per=4.15%, avg=893.70, stdev=144.81, samples=20 00:23:21.882 iops : min= 160, max= 314, avg=223.40, stdev=36.23, samples=20 00:23:21.882 lat (msec) : 50=17.24%, 100=75.92%, 250=6.84% 00:23:21.882 cpu : usr=42.00%, sys=1.39%, ctx=1423, majf=0, minf=9 00:23:21.882 IO depths : 1=2.0%, 2=4.2%, 4=12.0%, 8=70.5%, 16=11.4%, 32=0.0%, >=64=0.0% 00:23:21.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 issued rwts: total=2251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.882 filename2: (groupid=0, jobs=1): err= 0: pid=97682: Mon Jul 15 15:46:14 2024 00:23:21.882 read: IOPS=222, BW=891KiB/s (912kB/s)(8932KiB/10028msec) 00:23:21.882 slat (nsec): min=4684, max=35811, avg=11191.49, stdev=3876.98 00:23:21.882 clat (msec): min=31, max=135, avg=71.72, stdev=18.66 00:23:21.882 lat (msec): min=31, max=135, avg=71.73, stdev=18.66 00:23:21.882 clat percentiles (msec): 00:23:21.882 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 57], 00:23:21.882 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:23:21.882 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 108], 00:23:21.882 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 132], 99.95th=[ 132], 00:23:21.882 | 99.99th=[ 136] 00:23:21.882 bw ( KiB/s): min= 640, max= 1184, per=4.12%, avg=887.20, stdev=149.11, samples=20 00:23:21.882 iops : min= 160, max= 296, avg=221.70, stdev=37.25, samples=20 00:23:21.882 lat (msec) : 50=15.58%, 100=77.88%, 250=6.54% 00:23:21.882 cpu : usr=32.23%, sys=0.99%, ctx=986, majf=0, minf=9 00:23:21.882 IO depths : 1=0.7%, 2=2.0%, 4=9.1%, 8=75.4%, 16=12.9%, 32=0.0%, >=64=0.0% 00:23:21.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 complete : 0=0.0%, 4=89.5%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.882 issued rwts: total=2233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:21.882 00:23:21.882 Run status group 0 (all jobs): 00:23:21.882 READ: bw=21.0MiB/s (22.0MB/s), 786KiB/s-1017KiB/s (805kB/s-1041kB/s), io=211MiB (221MB), 
run=10002-10053msec 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.882 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.883 bdev_null0 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.883 [2024-07-15 15:46:15.158213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.883 bdev_null1 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.883 { 00:23:21.883 "params": { 00:23:21.883 "name": "Nvme$subsystem", 00:23:21.883 "trtype": "$TEST_TRANSPORT", 00:23:21.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.883 "adrfam": "ipv4", 00:23:21.883 "trsvcid": "$NVMF_PORT", 00:23:21.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.883 "hdgst": ${hdgst:-false}, 00:23:21.883 "ddgst": ${ddgst:-false} 00:23:21.883 }, 00:23:21.883 "method": "bdev_nvme_attach_controller" 00:23:21.883 } 00:23:21.883 EOF 00:23:21.883 )") 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:21.883 15:46:15 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.883 { 00:23:21.883 "params": { 00:23:21.883 "name": "Nvme$subsystem", 00:23:21.883 "trtype": "$TEST_TRANSPORT", 00:23:21.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.883 "adrfam": "ipv4", 00:23:21.883 "trsvcid": "$NVMF_PORT", 00:23:21.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.883 "hdgst": ${hdgst:-false}, 00:23:21.883 "ddgst": ${ddgst:-false} 00:23:21.883 }, 00:23:21.883 "method": "bdev_nvme_attach_controller" 00:23:21.883 } 00:23:21.883 EOF 00:23:21.883 )") 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:21.883 "params": { 00:23:21.883 "name": "Nvme0", 00:23:21.883 "trtype": "tcp", 00:23:21.883 "traddr": "10.0.0.2", 00:23:21.883 "adrfam": "ipv4", 00:23:21.883 "trsvcid": "4420", 00:23:21.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:21.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:21.883 "hdgst": false, 00:23:21.883 "ddgst": false 00:23:21.883 }, 00:23:21.883 "method": "bdev_nvme_attach_controller" 00:23:21.883 },{ 00:23:21.883 "params": { 00:23:21.883 "name": "Nvme1", 00:23:21.883 "trtype": "tcp", 00:23:21.883 "traddr": "10.0.0.2", 00:23:21.883 "adrfam": "ipv4", 00:23:21.883 "trsvcid": "4420", 00:23:21.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.883 "hdgst": false, 00:23:21.883 "ddgst": false 00:23:21.883 }, 00:23:21.883 "method": "bdev_nvme_attach_controller" 00:23:21.883 }' 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:21.883 15:46:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:21.884 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:21.884 ... 00:23:21.884 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:21.884 ... 
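Note: the two job-description lines above correspond roughly to the job file that gen_fio_conf feeds to fio over /dev/fd/61 for this run (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1, as set at target/dif.sh@115). A sketch of an equivalent standalone job file follows; the Nvme0n1/Nvme1n1 names are assumed from bdev_nvme_attach_controller's usual <name>n1 namespace naming, and thread/time_based are assumptions rather than values shown in the trace:

    [global]
    ioengine=spdk_bdev      ; requires LD_PRELOAD of the spdk_bdev fio plugin, as in the trace above
    thread=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1

With numjobs=2 across the two files, this is what produces the four fio threads started below.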
00:23:21.884 fio-3.35 00:23:21.884 Starting 4 threads 00:23:26.095 00:23:26.095 filename0: (groupid=0, jobs=1): err= 0: pid=97809: Mon Jul 15 15:46:20 2024 00:23:26.095 read: IOPS=2021, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5002msec) 00:23:26.095 slat (nsec): min=3246, max=56528, avg=14941.97, stdev=5202.31 00:23:26.095 clat (usec): min=2171, max=6290, avg=3882.31, stdev=174.08 00:23:26.095 lat (usec): min=2179, max=6303, avg=3897.25, stdev=174.51 00:23:26.095 clat percentiles (usec): 00:23:26.095 | 1.00th=[ 3654], 5.00th=[ 3720], 10.00th=[ 3752], 20.00th=[ 3785], 00:23:26.095 | 30.00th=[ 3818], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:23:26.095 | 70.00th=[ 3916], 80.00th=[ 3982], 90.00th=[ 4080], 95.00th=[ 4146], 00:23:26.095 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 5800], 99.95th=[ 5866], 00:23:26.095 | 99.99th=[ 5932] 00:23:26.095 bw ( KiB/s): min=15872, max=16384, per=24.99%, avg=16170.67, stdev=156.77, samples=9 00:23:26.095 iops : min= 1984, max= 2048, avg=2021.33, stdev=19.60, samples=9 00:23:26.095 lat (msec) : 4=82.56%, 10=17.44% 00:23:26.095 cpu : usr=93.56%, sys=5.30%, ctx=8, majf=0, minf=9 00:23:26.095 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:26.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.095 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.095 issued rwts: total=10112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.095 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:26.095 filename0: (groupid=0, jobs=1): err= 0: pid=97810: Mon Jul 15 15:46:20 2024 00:23:26.095 read: IOPS=2021, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5001msec) 00:23:26.095 slat (nsec): min=3668, max=59890, avg=13176.16, stdev=5460.15 00:23:26.095 clat (usec): min=2878, max=5134, avg=3897.30, stdev=145.10 00:23:26.095 lat (usec): min=2890, max=5165, avg=3910.48, stdev=144.95 00:23:26.095 clat percentiles (usec): 00:23:26.095 | 1.00th=[ 3654], 5.00th=[ 3720], 10.00th=[ 3752], 20.00th=[ 3785], 00:23:26.095 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3916], 00:23:26.095 | 70.00th=[ 3949], 80.00th=[ 4015], 90.00th=[ 4080], 95.00th=[ 4178], 00:23:26.095 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 4752], 99.95th=[ 4883], 00:23:26.095 | 99.99th=[ 5080] 00:23:26.095 bw ( KiB/s): min=15872, max=16384, per=25.01%, avg=16181.22, stdev=156.73, samples=9 00:23:26.095 iops : min= 1984, max= 2048, avg=2022.56, stdev=19.56, samples=9 00:23:26.095 lat (msec) : 4=80.23%, 10=19.77% 00:23:26.095 cpu : usr=93.76%, sys=5.10%, ctx=7, majf=0, minf=0 00:23:26.096 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:26.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.096 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.096 issued rwts: total=10112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.096 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:26.096 filename1: (groupid=0, jobs=1): err= 0: pid=97811: Mon Jul 15 15:46:20 2024 00:23:26.096 read: IOPS=2025, BW=15.8MiB/s (16.6MB/s)(79.2MiB/5004msec) 00:23:26.096 slat (nsec): min=3278, max=52239, avg=8507.50, stdev=3222.86 00:23:26.096 clat (usec): min=968, max=4518, avg=3905.44, stdev=187.59 00:23:26.096 lat (usec): min=975, max=4526, avg=3913.95, stdev=187.82 00:23:26.096 clat percentiles (usec): 00:23:26.096 | 1.00th=[ 3687], 5.00th=[ 3752], 10.00th=[ 3785], 20.00th=[ 3818], 00:23:26.096 | 30.00th=[ 3818], 40.00th=[ 3851], 
50.00th=[ 3884], 60.00th=[ 3916], 00:23:26.096 | 70.00th=[ 3949], 80.00th=[ 4015], 90.00th=[ 4113], 95.00th=[ 4178], 00:23:26.096 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 4424], 99.95th=[ 4490], 00:23:26.096 | 99.99th=[ 4490] 00:23:26.096 bw ( KiB/s): min=15872, max=16512, per=25.06%, avg=16213.33, stdev=192.00, samples=9 00:23:26.096 iops : min= 1984, max= 2064, avg=2026.67, stdev=24.00, samples=9 00:23:26.096 lat (usec) : 1000=0.03% 00:23:26.096 lat (msec) : 2=0.24%, 4=78.20%, 10=21.54% 00:23:26.096 cpu : usr=92.88%, sys=5.96%, ctx=8, majf=0, minf=0 00:23:26.096 IO depths : 1=11.2%, 2=25.0%, 4=50.0%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:26.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.096 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.096 issued rwts: total=10136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.096 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:26.096 filename1: (groupid=0, jobs=1): err= 0: pid=97812: Mon Jul 15 15:46:20 2024 00:23:26.096 read: IOPS=2021, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5003msec) 00:23:26.096 slat (nsec): min=3432, max=56709, avg=15384.22, stdev=4672.15 00:23:26.096 clat (usec): min=2093, max=6078, avg=3881.52, stdev=159.75 00:23:26.096 lat (usec): min=2106, max=6091, avg=3896.90, stdev=160.11 00:23:26.096 clat percentiles (usec): 00:23:26.096 | 1.00th=[ 3654], 5.00th=[ 3720], 10.00th=[ 3752], 20.00th=[ 3785], 00:23:26.096 | 30.00th=[ 3818], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:23:26.096 | 70.00th=[ 3916], 80.00th=[ 3982], 90.00th=[ 4080], 95.00th=[ 4146], 00:23:26.096 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 4883], 99.95th=[ 6063], 00:23:26.096 | 99.99th=[ 6063] 00:23:26.096 bw ( KiB/s): min=15872, max=16384, per=24.99%, avg=16170.67, stdev=156.77, samples=9 00:23:26.096 iops : min= 1984, max= 2048, avg=2021.33, stdev=19.60, samples=9 00:23:26.096 lat (msec) : 4=82.62%, 10=17.38% 00:23:26.096 cpu : usr=93.86%, sys=4.96%, ctx=11, majf=0, minf=0 00:23:26.096 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:26.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.096 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.096 issued rwts: total=10112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.096 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:26.096 00:23:26.096 Run status group 0 (all jobs): 00:23:26.096 READ: bw=63.2MiB/s (66.3MB/s), 15.8MiB/s-15.8MiB/s (16.6MB/s-16.6MB/s), io=316MiB (332MB), run=5001-5004msec 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.096 15:46:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.096 ************************************ 00:23:26.096 END TEST fio_dif_rand_params 00:23:26.096 ************************************ 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.096 00:23:26.096 real 0m23.335s 00:23:26.096 user 2m5.683s 00:23:26.096 sys 0m5.462s 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:26.096 15:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.355 15:46:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:26.355 15:46:21 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:26.355 15:46:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:26.355 15:46:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:26.355 15:46:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:26.355 ************************************ 00:23:26.355 START TEST fio_dif_digest 00:23:26.355 ************************************ 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@130 
-- # create_subsystems 0 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:26.355 bdev_null0 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:26.355 [2024-07-15 15:46:21.279797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.355 { 00:23:26.355 "params": { 00:23:26.355 "name": "Nvme$subsystem", 00:23:26.355 "trtype": "$TEST_TRANSPORT", 00:23:26.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.355 
"adrfam": "ipv4", 00:23:26.355 "trsvcid": "$NVMF_PORT", 00:23:26.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.355 "hdgst": ${hdgst:-false}, 00:23:26.355 "ddgst": ${ddgst:-false} 00:23:26.355 }, 00:23:26.355 "method": "bdev_nvme_attach_controller" 00:23:26.355 } 00:23:26.355 EOF 00:23:26.355 )") 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:26.355 "params": { 00:23:26.355 "name": "Nvme0", 00:23:26.355 "trtype": "tcp", 00:23:26.355 "traddr": "10.0.0.2", 00:23:26.355 "adrfam": "ipv4", 00:23:26.355 "trsvcid": "4420", 00:23:26.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:26.355 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:26.355 "hdgst": true, 00:23:26.355 "ddgst": true 00:23:26.355 }, 00:23:26.355 "method": "bdev_nvme_attach_controller" 00:23:26.355 }' 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:26.355 15:46:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:26.614 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:26.614 ... 
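The companion job file arrives on /dev/fd/61 and is produced by gen_fio_conf from the options traced at the top of the test (bs=128k, numjobs=3, iodepth=3, runtime=10, hdgst/ddgst enabled in the JSON side). A job file consistent with those options and with the "filename0 ... ioengine=spdk_bdev, iodepth=3" line printed above would look roughly like this; the exact file dif.sh generates may differ, and the bdev name is an assumption:

# illustrative fio job matching the traced options; gen_fio_conf's real output
# may differ in layout, and "Nvme0n1" (the namespace bdev of controller Nvme0)
# is assumed. The spdk_bdev ioengine comes from the LD_PRELOADed fio plugin
# shown in the trace.
cat <<'FIO' > /tmp/dif_digest.fio   # hypothetical path
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
FIO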
00:23:26.614 fio-3.35 00:23:26.614 Starting 3 threads 00:23:38.811 00:23:38.811 filename0: (groupid=0, jobs=1): err= 0: pid=97917: Mon Jul 15 15:46:31 2024 00:23:38.811 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(318MiB/10048msec) 00:23:38.811 slat (nsec): min=3496, max=58004, avg=12738.35, stdev=4450.79 00:23:38.811 clat (usec): min=9037, max=53312, avg=11817.24, stdev=1966.04 00:23:38.811 lat (usec): min=9049, max=53325, avg=11829.98, stdev=1966.17 00:23:38.811 clat percentiles (usec): 00:23:38.811 | 1.00th=[ 9896], 5.00th=[10421], 10.00th=[10683], 20.00th=[11076], 00:23:38.811 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:23:38.811 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:23:38.811 | 99.00th=[13960], 99.50th=[14222], 99.90th=[52691], 99.95th=[53216], 00:23:38.811 | 99.99th=[53216] 00:23:38.811 bw ( KiB/s): min=28416, max=34560, per=38.94%, avg=32537.60, stdev=1419.04, samples=20 00:23:38.811 iops : min= 222, max= 270, avg=254.20, stdev=11.09, samples=20 00:23:38.811 lat (msec) : 10=1.14%, 20=98.66%, 50=0.04%, 100=0.16% 00:23:38.812 cpu : usr=92.37%, sys=6.21%, ctx=22, majf=0, minf=0 00:23:38.812 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:38.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.812 issued rwts: total=2544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.812 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:38.812 filename0: (groupid=0, jobs=1): err= 0: pid=97918: Mon Jul 15 15:46:31 2024 00:23:38.812 read: IOPS=222, BW=27.8MiB/s (29.2MB/s)(279MiB/10006msec) 00:23:38.812 slat (nsec): min=6915, max=45117, avg=12581.64, stdev=4723.65 00:23:38.812 clat (usec): min=7767, max=17950, avg=13446.59, stdev=1189.15 00:23:38.812 lat (usec): min=7782, max=17977, avg=13459.17, stdev=1189.41 00:23:38.812 clat percentiles (usec): 00:23:38.812 | 1.00th=[11076], 5.00th=[11731], 10.00th=[12125], 20.00th=[12518], 00:23:38.812 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[13698], 00:23:38.812 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14877], 95.00th=[15533], 00:23:38.812 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17957], 99.95th=[17957], 00:23:38.812 | 99.99th=[17957] 00:23:38.812 bw ( KiB/s): min=26112, max=29952, per=34.30%, avg=28658.53, stdev=1110.89, samples=19 00:23:38.812 iops : min= 204, max= 234, avg=223.89, stdev= 8.68, samples=19 00:23:38.812 lat (msec) : 10=0.58%, 20=99.42% 00:23:38.812 cpu : usr=91.43%, sys=7.18%, ctx=27, majf=0, minf=9 00:23:38.812 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:38.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.812 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.812 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:38.812 filename0: (groupid=0, jobs=1): err= 0: pid=97919: Mon Jul 15 15:46:31 2024 00:23:38.812 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(223MiB/10007msec) 00:23:38.812 slat (usec): min=4, max=344, avg=13.92, stdev=12.57 00:23:38.812 clat (usec): min=7716, max=20400, avg=16789.26, stdev=1172.52 00:23:38.812 lat (usec): min=7728, max=20416, avg=16803.19, stdev=1173.96 00:23:38.812 clat percentiles (usec): 00:23:38.812 | 1.00th=[14222], 5.00th=[15139], 10.00th=[15533], 20.00th=[15926], 00:23:38.812 | 
30.00th=[16188], 40.00th=[16450], 50.00th=[16712], 60.00th=[17171], 00:23:38.812 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18220], 95.00th=[18744], 00:23:38.812 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20317], 99.95th=[20317], 00:23:38.812 | 99.99th=[20317] 00:23:38.812 bw ( KiB/s): min=20992, max=23808, per=27.43%, avg=22916.32, stdev=694.82, samples=19 00:23:38.812 iops : min= 164, max= 186, avg=179.00, stdev= 5.43, samples=19 00:23:38.812 lat (msec) : 10=0.06%, 20=99.61%, 50=0.34% 00:23:38.812 cpu : usr=92.78%, sys=5.61%, ctx=204, majf=0, minf=9 00:23:38.812 IO depths : 1=4.1%, 2=95.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:38.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.812 issued rwts: total=1786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.812 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:38.812 00:23:38.812 Run status group 0 (all jobs): 00:23:38.812 READ: bw=81.6MiB/s (85.6MB/s), 22.3MiB/s-31.6MiB/s (23.4MB/s-33.2MB/s), io=820MiB (860MB), run=10006-10048msec 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:38.812 ************************************ 00:23:38.812 END TEST fio_dif_digest 00:23:38.812 ************************************ 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.812 00:23:38.812 real 0m10.940s 00:23:38.812 user 0m28.353s 00:23:38.812 sys 0m2.130s 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.812 15:46:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:38.812 15:46:32 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:38.812 15:46:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:38.812 15:46:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:38.812 rmmod nvme_tcp 00:23:38.812 rmmod nvme_fabrics 00:23:38.812 
rmmod nvme_keyring 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97173 ']' 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97173 00:23:38.812 15:46:32 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 97173 ']' 00:23:38.812 15:46:32 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 97173 00:23:38.812 15:46:32 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:23:38.812 15:46:32 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.812 15:46:32 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97173 00:23:38.812 killing process with pid 97173 00:23:38.812 15:46:32 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:38.812 15:46:32 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:38.812 15:46:32 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97173' 00:23:38.812 15:46:32 nvmf_dif -- common/autotest_common.sh@967 -- # kill 97173 00:23:38.812 15:46:32 nvmf_dif -- common/autotest_common.sh@972 -- # wait 97173 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:38.812 15:46:32 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:38.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:38.812 Waiting for block devices as requested 00:23:38.812 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:38.812 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:38.812 15:46:33 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:38.812 15:46:33 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:38.812 15:46:33 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:38.812 15:46:33 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:38.812 15:46:33 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.812 15:46:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:38.812 15:46:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.812 15:46:33 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:38.812 00:23:38.812 real 0m58.600s 00:23:38.812 user 3m49.921s 00:23:38.812 sys 0m15.039s 00:23:38.812 15:46:33 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.812 15:46:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:38.812 ************************************ 00:23:38.812 END TEST nvmf_dif 00:23:38.812 ************************************ 00:23:38.812 15:46:33 -- common/autotest_common.sh@1142 -- # return 0 00:23:38.812 15:46:33 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:38.812 15:46:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:38.812 15:46:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:38.812 15:46:33 -- common/autotest_common.sh@10 -- # set +x 00:23:38.812 ************************************ 00:23:38.812 START TEST nvmf_abort_qd_sizes 00:23:38.812 ************************************ 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 
-- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:38.812 * Looking for test storage... 00:23:38.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.812 15:46:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:38.813 15:46:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:38.813 Cannot find device "nvmf_tgt_br" 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:38.813 Cannot find device "nvmf_tgt_br2" 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:38.813 Cannot find device "nvmf_tgt_br" 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:38.813 Cannot find device "nvmf_tgt_br2" 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:38.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:38.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:38.813 15:46:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:38.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:23:38.813 00:23:38.813 --- 10.0.0.2 ping statistics --- 00:23:38.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.813 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:38.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:38.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:23:38.813 00:23:38.813 --- 10.0.0.3 ping statistics --- 00:23:38.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.813 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:38.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:23:38.813 00:23:38.813 --- 10.0.0.1 ping statistics --- 00:23:38.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.813 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:38.813 15:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:39.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:39.331 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:39.331 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=98509 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 98509 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 98509 ']' 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.331 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:39.590 [2024-07-15 15:46:34.488024] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:23:39.590 [2024-07-15 15:46:34.488112] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.590 [2024-07-15 15:46:34.629203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.590 [2024-07-15 15:46:34.685471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.590 [2024-07-15 15:46:34.685578] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.590 [2024-07-15 15:46:34.685591] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.590 [2024-07-15 15:46:34.685600] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.590 [2024-07-15 15:46:34.685607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.590 [2024-07-15 15:46:34.685792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.590 [2024-07-15 15:46:34.686506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.590 [2024-07-15 15:46:34.686644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.590 [2024-07-15 15:46:34.686658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:23:39.849 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:39.850 15:46:34 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:39.850 15:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:39.850 ************************************ 00:23:39.850 START TEST spdk_target_abort 00:23:39.850 ************************************ 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:39.850 spdk_targetn1 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:39.850 [2024-07-15 15:46:34.926960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:39.850 [2024-07-15 15:46:34.955090] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.850 15:46:34 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:39.850 15:46:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:43.138 Initializing NVMe Controllers 00:23:43.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:43.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:43.138 Initialization complete. Launching workers. 
00:23:43.138 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10752, failed: 0 00:23:43.138 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1066, failed to submit 9686 00:23:43.138 success 796, unsuccess 270, failed 0 00:23:43.138 15:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:43.138 15:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:46.431 Initializing NVMe Controllers 00:23:46.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:46.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:46.431 Initialization complete. Launching workers. 00:23:46.431 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6007, failed: 0 00:23:46.431 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1249, failed to submit 4758 00:23:46.431 success 260, unsuccess 989, failed 0 00:23:46.431 15:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:46.431 15:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:49.714 Initializing NVMe Controllers 00:23:49.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:49.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:49.714 Initialization complete. Launching workers. 
00:23:49.714 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30587, failed: 0 00:23:49.714 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2756, failed to submit 27831 00:23:49.714 success 429, unsuccess 2327, failed 0 00:23:49.714 15:46:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:49.714 15:46:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.714 15:46:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:49.714 15:46:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.714 15:46:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:49.714 15:46:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.714 15:46:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:50.281 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.281 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98509 00:23:50.281 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 98509 ']' 00:23:50.281 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 98509 00:23:50.281 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:23:50.281 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.281 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98509 00:23:50.540 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:50.540 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:50.540 killing process with pid 98509 00:23:50.540 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98509' 00:23:50.540 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 98509 00:23:50.540 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 98509 00:23:50.540 00:23:50.540 real 0m10.739s 00:23:50.540 user 0m41.323s 00:23:50.540 sys 0m1.659s 00:23:50.540 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.540 15:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:50.540 ************************************ 00:23:50.540 END TEST spdk_target_abort 00:23:50.540 ************************************ 00:23:50.540 15:46:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:23:50.540 15:46:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:50.540 15:46:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:50.540 15:46:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.540 15:46:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:50.540 
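
The rabort calls traced above reduce to roughly the following helper (reconstructed from the abort_qd_sizes.sh lines echoed in the trace; a simplified sketch, not the exact script — the abort binary is the build/examples/abort path shown above):

    # Sketch of the rabort helper as exercised in the trace: build the transport ID
    # string field by field, then run the abort example once per queue depth.
    rabort() {
        local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
        local qds=(4 24 64) qd target='' r

        for r in trtype adrfam traddr trsvcid subnqn; do
            target+="${target:+ }$r:${!r}"   # e.g. "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 ..."
        done

        for qd in "${qds[@]}"; do
            build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
        done
    }

    # Invocation seen in the trace:
    rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
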
************************************ 00:23:50.540 START TEST kernel_target_abort 00:23:50.540 ************************************ 00:23:50.540 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:23:50.540 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:50.540 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:23:50.540 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:50.541 15:46:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:51.108 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:51.109 Waiting for block devices as requested 00:23:51.109 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:51.109 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:51.109 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:51.109 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:51.109 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:51.109 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:51.109 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:51.109 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:51.109 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:51.109 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:51.109 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:51.370 No valid GPT data, bailing 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:51.370 No valid GPT data, bailing 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:51.370 No valid GPT data, bailing 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:51.370 No valid GPT data, bailing 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:23:51.370 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb --hostid=a562778a-75ef-4aa2-a26b-1faf0d9483fb -a 10.0.0.1 -t tcp -s 4420 00:23:51.647 00:23:51.647 Discovery Log Number of Records 2, Generation counter 2 00:23:51.647 =====Discovery Log Entry 0====== 00:23:51.647 trtype: tcp 00:23:51.647 adrfam: ipv4 00:23:51.647 subtype: current discovery subsystem 00:23:51.647 treq: not specified, sq flow control disable supported 00:23:51.647 portid: 1 00:23:51.647 trsvcid: 4420 00:23:51.647 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:51.647 traddr: 10.0.0.1 00:23:51.647 eflags: none 00:23:51.647 sectype: none 00:23:51.647 =====Discovery Log Entry 1====== 00:23:51.647 trtype: tcp 00:23:51.647 adrfam: ipv4 00:23:51.647 subtype: nvme subsystem 00:23:51.647 treq: not specified, sq flow control disable supported 00:23:51.647 portid: 1 00:23:51.647 trsvcid: 4420 00:23:51.647 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:51.647 traddr: 10.0.0.1 00:23:51.647 eflags: none 00:23:51.647 sectype: none 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:51.647 15:46:46 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:51.647 15:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:54.943 Initializing NVMe Controllers 00:23:54.943 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:54.943 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:54.943 Initialization complete. Launching workers. 00:23:54.943 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32204, failed: 0 00:23:54.944 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32204, failed to submit 0 00:23:54.944 success 0, unsuccess 32204, failed 0 00:23:54.944 15:46:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:54.944 15:46:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:58.226 Initializing NVMe Controllers 00:23:58.226 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:58.226 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:58.226 Initialization complete. Launching workers. 
00:23:58.226 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68257, failed: 0 00:23:58.226 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28350, failed to submit 39907 00:23:58.226 success 0, unsuccess 28350, failed 0 00:23:58.226 15:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:58.226 15:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:01.504 Initializing NVMe Controllers 00:24:01.504 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:01.504 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:01.504 Initialization complete. Launching workers. 00:24:01.504 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74645, failed: 0 00:24:01.504 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18610, failed to submit 56035 00:24:01.504 success 0, unsuccess 18610, failed 0 00:24:01.504 15:46:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:01.504 15:46:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:01.504 15:46:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:24:01.504 15:46:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:01.504 15:46:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:01.504 15:46:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:01.504 15:46:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:01.504 15:46:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:01.504 15:46:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:01.504 15:46:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:01.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:03.666 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:03.666 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:03.666 ************************************ 00:24:03.666 END TEST kernel_target_abort 00:24:03.666 ************************************ 00:24:03.666 00:24:03.666 real 0m12.859s 00:24:03.666 user 0m5.964s 00:24:03.666 sys 0m4.162s 00:24:03.666 15:46:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:03.666 15:46:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:03.666 
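
The kernel_target_abort setup and teardown traced above follow the standard Linux nvmet configfs sequence. The values (NQN, /dev/nvme1n1, 10.0.0.1, tcp, 4420, ipv4) come straight from the trace; the redirect targets are not echoed by xtrace, so the attribute file names below are the usual nvmet ones and the sketch approximates nvmf/common.sh rather than copying it:

    # Approximate configfs setup for the in-kernel NVMe/TCP target used above.
    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo 1            > "$subsys/attr_allow_any_host"       # attribute names assumed, not shown in the trace
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"  # block device selected by the trace
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # Teardown, mirroring clean_kernel_target in the trace:
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/$nqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet
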
15:46:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:03.666 rmmod nvme_tcp 00:24:03.666 rmmod nvme_fabrics 00:24:03.666 rmmod nvme_keyring 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:03.666 Process with pid 98509 is not found 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 98509 ']' 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 98509 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 98509 ']' 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 98509 00:24:03.666 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (98509) - No such process 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 98509 is not found' 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:03.666 15:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:03.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:03.925 Waiting for block devices as requested 00:24:03.925 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:04.184 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:04.184 15:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:04.184 15:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:04.184 15:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:04.184 15:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:04.184 15:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.184 15:46:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:04.184 15:46:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.184 15:46:59 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:04.184 ************************************ 00:24:04.184 END TEST nvmf_abort_qd_sizes 00:24:04.184 ************************************ 00:24:04.184 00:24:04.184 real 0m26.064s 00:24:04.184 user 0m48.256s 00:24:04.184 sys 0m7.115s 00:24:04.184 15:46:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:04.184 15:46:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:04.184 15:46:59 -- common/autotest_common.sh@1142 -- # return 0 00:24:04.184 15:46:59 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:04.184 15:46:59 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:24:04.184 15:46:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.184 15:46:59 -- common/autotest_common.sh@10 -- # set +x 00:24:04.184 ************************************ 00:24:04.184 START TEST keyring_file 00:24:04.184 ************************************ 00:24:04.184 15:46:59 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:04.443 * Looking for test storage... 00:24:04.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:04.443 15:46:59 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:04.443 15:46:59 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:04.443 15:46:59 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.443 15:46:59 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.443 15:46:59 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.443 15:46:59 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.443 15:46:59 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.443 15:46:59 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.443 15:46:59 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:04.443 15:46:59 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@47 -- # : 0 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.443 15:46:59 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.443 15:46:59 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:04.443 15:46:59 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:04.443 15:46:59 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:04.443 15:46:59 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:04.443 15:46:59 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:04.444 15:46:59 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:04.444 15:46:59 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gA6iNc3NPb 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:04.444 15:46:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:24:04.444 15:46:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:04.444 15:46:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:04.444 15:46:59 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:04.444 15:46:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:04.444 15:46:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gA6iNc3NPb 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gA6iNc3NPb 00:24:04.444 15:46:59 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.gA6iNc3NPb 00:24:04.444 15:46:59 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Y6ecuXww8j 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:04.444 15:46:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:04.444 15:46:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:04.444 15:46:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:04.444 15:46:59 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:04.444 15:46:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:04.444 15:46:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Y6ecuXww8j 00:24:04.444 15:46:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Y6ecuXww8j 00:24:04.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.444 15:46:59 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Y6ecuXww8j 00:24:04.444 15:46:59 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:04.444 15:46:59 keyring_file -- keyring/file.sh@30 -- # tgtpid=99371 00:24:04.444 15:46:59 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99371 00:24:04.444 15:46:59 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99371 ']' 00:24:04.444 15:46:59 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.444 15:46:59 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.444 15:46:59 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.444 15:46:59 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.444 15:46:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:04.444 [2024-07-15 15:46:59.532717] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:24:04.444 [2024-07-15 15:46:59.533687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99371 ] 00:24:04.703 [2024-07-15 15:46:59.678920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.703 [2024-07-15 15:46:59.749049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:05.641 15:47:00 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:05.641 [2024-07-15 15:47:00.564240] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.641 null0 00:24:05.641 [2024-07-15 15:47:00.596372] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:05.641 [2024-07-15 15:47:00.597055] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:05.641 [2024-07-15 15:47:00.604413] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.641 15:47:00 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.641 15:47:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:05.641 [2024-07-15 15:47:00.624467] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:05.641 2024/07/15 15:47:00 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:24:05.641 request: 00:24:05.641 { 00:24:05.641 "method": "nvmf_subsystem_add_listener", 00:24:05.641 "params": { 00:24:05.641 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.641 "secure_channel": false, 00:24:05.642 "listen_address": { 00:24:05.642 "trtype": "tcp", 00:24:05.642 "traddr": "127.0.0.1", 00:24:05.642 "trsvcid": "4420" 00:24:05.642 } 00:24:05.642 } 00:24:05.642 } 00:24:05.642 Got JSON-RPC error 
response 00:24:05.642 GoRPCClient: error on JSON-RPC call 00:24:05.642 15:47:00 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:05.642 15:47:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:05.642 15:47:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:05.642 15:47:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:05.642 15:47:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:05.642 15:47:00 keyring_file -- keyring/file.sh@46 -- # bperfpid=99406 00:24:05.642 15:47:00 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:05.642 15:47:00 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99406 /var/tmp/bperf.sock 00:24:05.642 15:47:00 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99406 ']' 00:24:05.642 15:47:00 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:05.642 15:47:00 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.642 15:47:00 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:05.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:05.642 15:47:00 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.642 15:47:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:05.642 [2024-07-15 15:47:00.717152] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:24:05.642 [2024-07-15 15:47:00.717938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99406 ] 00:24:05.901 [2024-07-15 15:47:00.867026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.901 [2024-07-15 15:47:00.942342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.901 15:47:01 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.901 15:47:01 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:05.901 15:47:01 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gA6iNc3NPb 00:24:05.901 15:47:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gA6iNc3NPb 00:24:06.158 15:47:01 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Y6ecuXww8j 00:24:06.158 15:47:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Y6ecuXww8j 00:24:06.723 15:47:01 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:06.723 15:47:01 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:06.723 15:47:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:06.723 15:47:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:06.723 15:47:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:06.723 15:47:01 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.gA6iNc3NPb == 
\/\t\m\p\/\t\m\p\.\g\A\6\i\N\c\3\N\P\b ]] 00:24:06.723 15:47:01 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:24:06.723 15:47:01 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:06.723 15:47:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:06.724 15:47:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:06.724 15:47:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:06.986 15:47:02 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Y6ecuXww8j == \/\t\m\p\/\t\m\p\.\Y\6\e\c\u\X\w\w\8\j ]] 00:24:06.986 15:47:02 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:24:06.986 15:47:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:06.986 15:47:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:06.986 15:47:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:06.986 15:47:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:06.986 15:47:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:07.243 15:47:02 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:07.243 15:47:02 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:24:07.243 15:47:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:07.243 15:47:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:07.243 15:47:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:07.243 15:47:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:07.243 15:47:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:07.502 15:47:02 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:07.502 15:47:02 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:07.502 15:47:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:07.760 [2024-07-15 15:47:02.769407] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.760 nvme0n1 00:24:07.760 15:47:02 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:24:07.760 15:47:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:07.760 15:47:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:07.760 15:47:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:07.760 15:47:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:07.760 15:47:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:08.019 15:47:03 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:24:08.019 15:47:03 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:24:08.019 15:47:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:08.019 15:47:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:08.019 15:47:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:24:08.019 15:47:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:08.019 15:47:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:08.590 15:47:03 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:24:08.590 15:47:03 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:08.590 Running I/O for 1 seconds... 00:24:09.524 00:24:09.524 Latency(us) 00:24:09.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.524 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:09.524 nvme0n1 : 1.01 10259.21 40.08 0.00 0.00 12424.05 6791.91 25499.46 00:24:09.524 =================================================================================================================== 00:24:09.524 Total : 10259.21 40.08 0.00 0.00 12424.05 6791.91 25499.46 00:24:09.524 0 00:24:09.524 15:47:04 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:09.524 15:47:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:09.782 15:47:04 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:24:09.782 15:47:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:09.782 15:47:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:09.782 15:47:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:09.782 15:47:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:09.782 15:47:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:10.040 15:47:05 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:24:10.040 15:47:05 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:24:10.040 15:47:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:10.040 15:47:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:10.040 15:47:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:10.040 15:47:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:10.040 15:47:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:10.298 15:47:05 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:10.298 15:47:05 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:10.298 15:47:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:10.298 15:47:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:10.298 15:47:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:10.298 15:47:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:10.298 15:47:05 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:10.298 15:47:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:24:10.298 15:47:05 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:10.298 15:47:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:10.557 [2024-07-15 15:47:05.633555] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:10.557 [2024-07-15 15:47:05.634329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed0f30 (107): Transport endpoint is not connected 00:24:10.557 [2024-07-15 15:47:05.635319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed0f30 (9): Bad file descriptor 00:24:10.557 [2024-07-15 15:47:05.636315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:10.557 [2024-07-15 15:47:05.636336] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:10.557 [2024-07-15 15:47:05.636346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:10.557 2024/07/15 15:47:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:10.557 request: 00:24:10.557 { 00:24:10.557 "method": "bdev_nvme_attach_controller", 00:24:10.557 "params": { 00:24:10.557 "name": "nvme0", 00:24:10.557 "trtype": "tcp", 00:24:10.557 "traddr": "127.0.0.1", 00:24:10.557 "adrfam": "ipv4", 00:24:10.557 "trsvcid": "4420", 00:24:10.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:10.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:10.557 "prchk_reftag": false, 00:24:10.557 "prchk_guard": false, 00:24:10.557 "hdgst": false, 00:24:10.557 "ddgst": false, 00:24:10.557 "psk": "key1" 00:24:10.557 } 00:24:10.557 } 00:24:10.557 Got JSON-RPC error response 00:24:10.557 GoRPCClient: error on JSON-RPC call 00:24:10.557 15:47:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:10.557 15:47:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:10.557 15:47:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:10.557 15:47:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:10.557 15:47:05 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:24:10.557 15:47:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:10.557 15:47:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:10.557 15:47:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:10.557 15:47:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:10.557 15:47:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:10.816 15:47:05 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:24:10.816 
15:47:05 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:24:10.816 15:47:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:10.816 15:47:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:10.816 15:47:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:10.816 15:47:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:10.816 15:47:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.384 15:47:06 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:11.384 15:47:06 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:24:11.384 15:47:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:11.384 15:47:06 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:24:11.384 15:47:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:11.644 15:47:06 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:24:11.644 15:47:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.644 15:47:06 keyring_file -- keyring/file.sh@77 -- # jq length 00:24:11.904 15:47:06 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:24:11.904 15:47:06 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.gA6iNc3NPb 00:24:11.904 15:47:06 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.gA6iNc3NPb 00:24:11.904 15:47:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:11.904 15:47:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.gA6iNc3NPb 00:24:11.904 15:47:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:11.904 15:47:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:11.904 15:47:06 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:11.904 15:47:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:11.904 15:47:06 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gA6iNc3NPb 00:24:11.904 15:47:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gA6iNc3NPb 00:24:12.163 [2024-07-15 15:47:07.245422] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gA6iNc3NPb': 0100660 00:24:12.164 [2024-07-15 15:47:07.245465] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:12.164 2024/07/15 15:47:07 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.gA6iNc3NPb], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:24:12.164 request: 00:24:12.164 { 00:24:12.164 "method": "keyring_file_add_key", 00:24:12.164 "params": { 00:24:12.164 "name": "key0", 00:24:12.164 "path": "/tmp/tmp.gA6iNc3NPb" 00:24:12.164 } 00:24:12.164 } 00:24:12.164 Got JSON-RPC error response 00:24:12.164 GoRPCClient: error on JSON-RPC call 00:24:12.164 15:47:07 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:24:12.164 15:47:07 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:12.164 15:47:07 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:12.164 15:47:07 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:12.164 15:47:07 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.gA6iNc3NPb 00:24:12.164 15:47:07 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gA6iNc3NPb 00:24:12.164 15:47:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gA6iNc3NPb 00:24:12.733 15:47:07 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.gA6iNc3NPb 00:24:12.733 15:47:07 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:24:12.733 15:47:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:12.733 15:47:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:12.733 15:47:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:12.733 15:47:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:12.733 15:47:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:12.733 15:47:07 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:24:12.733 15:47:07 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:12.733 15:47:07 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:12.733 15:47:07 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:12.733 15:47:07 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:12.733 15:47:07 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.733 15:47:07 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:12.733 15:47:07 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.733 15:47:07 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:12.733 15:47:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:12.992 [2024-07-15 15:47:08.117651] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.gA6iNc3NPb': No such file or directory 00:24:12.992 [2024-07-15 15:47:08.117696] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:12.992 [2024-07-15 15:47:08.117727] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:12.992 [2024-07-15 15:47:08.117736] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:12.992 [2024-07-15 15:47:08.117744] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:12.992 2024/07/15 
15:47:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:24:13.250 request: 00:24:13.250 { 00:24:13.250 "method": "bdev_nvme_attach_controller", 00:24:13.250 "params": { 00:24:13.250 "name": "nvme0", 00:24:13.250 "trtype": "tcp", 00:24:13.250 "traddr": "127.0.0.1", 00:24:13.250 "adrfam": "ipv4", 00:24:13.250 "trsvcid": "4420", 00:24:13.250 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:13.251 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:13.251 "prchk_reftag": false, 00:24:13.251 "prchk_guard": false, 00:24:13.251 "hdgst": false, 00:24:13.251 "ddgst": false, 00:24:13.251 "psk": "key0" 00:24:13.251 } 00:24:13.251 } 00:24:13.251 Got JSON-RPC error response 00:24:13.251 GoRPCClient: error on JSON-RPC call 00:24:13.251 15:47:08 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:13.251 15:47:08 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:13.251 15:47:08 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:13.251 15:47:08 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:13.251 15:47:08 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:24:13.251 15:47:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:13.509 15:47:08 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:13.509 15:47:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:13.509 15:47:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:13.509 15:47:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:13.509 15:47:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:13.509 15:47:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:13.509 15:47:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pzXMVTnvKA 00:24:13.509 15:47:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:13.509 15:47:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:13.509 15:47:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:13.509 15:47:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:13.509 15:47:08 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:13.509 15:47:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:13.509 15:47:08 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:13.509 15:47:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pzXMVTnvKA 00:24:13.509 15:47:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pzXMVTnvKA 00:24:13.509 15:47:08 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.pzXMVTnvKA 00:24:13.509 15:47:08 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pzXMVTnvKA 00:24:13.510 15:47:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pzXMVTnvKA 00:24:13.769 15:47:08 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:13.769 15:47:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:14.027 nvme0n1 00:24:14.027 15:47:09 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:24:14.027 15:47:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:14.027 15:47:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:14.027 15:47:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:14.027 15:47:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:14.027 15:47:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:14.286 15:47:09 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:24:14.286 15:47:09 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:24:14.286 15:47:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:14.545 15:47:09 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:24:14.545 15:47:09 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:24:14.545 15:47:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:14.545 15:47:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:14.545 15:47:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:14.804 15:47:09 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:24:14.804 15:47:09 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:24:14.804 15:47:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:14.804 15:47:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:14.804 15:47:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:14.804 15:47:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:14.804 15:47:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:15.064 15:47:10 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:15.064 15:47:10 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:15.064 15:47:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:15.322 15:47:10 keyring_file -- keyring/file.sh@104 -- # jq length 00:24:15.322 15:47:10 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:15.322 15:47:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:15.581 15:47:10 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:15.581 15:47:10 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pzXMVTnvKA 00:24:15.581 15:47:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pzXMVTnvKA 
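For reference, the temporary key files registered above come from keyring/common.sh prep_key, which routes the raw hex string through nvmf/common.sh format_interchange_psk and an inline python snippet. Below is a minimal standalone sketch of that encoding, written under the assumption that the interchange layout is the configured key bytes followed by their little-endian CRC32, base64-encoded and wrapped as NVMeTLSkey-1:<digest>:...:; the inline python in nvmf/common.sh remains the authoritative implementation, and the function name here is only illustrative.

#!/usr/bin/env python3
# Sketch (not the verbatim SPDK helper) of what format_interchange_psk appears to
# compute for prep_key key0 00112233445566778899aabbccddeeff 0 above.
import base64
import struct
import zlib

def format_interchange_psk(key_hex: str, digest: int) -> str:
    # Assumed layout: base64(key bytes + CRC32 of the key bytes, little-endian),
    # wrapped in the NVMe/TCP TLS PSK interchange prefix and trailing colon.
    key_bytes = key_hex.encode()
    crc = zlib.crc32(key_bytes) & 0xFFFFFFFF
    blob = key_bytes + struct.pack('<I', crc)
    return 'NVMeTLSkey-1:{:02x}:{}:'.format(digest, base64.b64encode(blob).decode())

if __name__ == '__main__':
    print(format_interchange_psk('00112233445566778899aabbccddeeff', 0))

The resulting string is presumably what the chmod-0600 temp files above contain and what keyring_file_add_key hands to the bperf application.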
00:24:15.840 15:47:10 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Y6ecuXww8j 00:24:15.840 15:47:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Y6ecuXww8j 00:24:16.098 15:47:11 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:16.098 15:47:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:16.356 nvme0n1 00:24:16.356 15:47:11 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:24:16.356 15:47:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:16.924 15:47:11 keyring_file -- keyring/file.sh@112 -- # config='{ 00:24:16.924 "subsystems": [ 00:24:16.924 { 00:24:16.924 "subsystem": "keyring", 00:24:16.924 "config": [ 00:24:16.924 { 00:24:16.924 "method": "keyring_file_add_key", 00:24:16.924 "params": { 00:24:16.924 "name": "key0", 00:24:16.924 "path": "/tmp/tmp.pzXMVTnvKA" 00:24:16.924 } 00:24:16.924 }, 00:24:16.924 { 00:24:16.924 "method": "keyring_file_add_key", 00:24:16.924 "params": { 00:24:16.924 "name": "key1", 00:24:16.924 "path": "/tmp/tmp.Y6ecuXww8j" 00:24:16.924 } 00:24:16.924 } 00:24:16.924 ] 00:24:16.924 }, 00:24:16.924 { 00:24:16.924 "subsystem": "iobuf", 00:24:16.924 "config": [ 00:24:16.924 { 00:24:16.924 "method": "iobuf_set_options", 00:24:16.924 "params": { 00:24:16.924 "large_bufsize": 135168, 00:24:16.924 "large_pool_count": 1024, 00:24:16.924 "small_bufsize": 8192, 00:24:16.924 "small_pool_count": 8192 00:24:16.924 } 00:24:16.924 } 00:24:16.924 ] 00:24:16.924 }, 00:24:16.924 { 00:24:16.924 "subsystem": "sock", 00:24:16.924 "config": [ 00:24:16.924 { 00:24:16.924 "method": "sock_set_default_impl", 00:24:16.924 "params": { 00:24:16.924 "impl_name": "posix" 00:24:16.924 } 00:24:16.924 }, 00:24:16.924 { 00:24:16.924 "method": "sock_impl_set_options", 00:24:16.924 "params": { 00:24:16.924 "enable_ktls": false, 00:24:16.924 "enable_placement_id": 0, 00:24:16.924 "enable_quickack": false, 00:24:16.924 "enable_recv_pipe": true, 00:24:16.924 "enable_zerocopy_send_client": false, 00:24:16.924 "enable_zerocopy_send_server": true, 00:24:16.924 "impl_name": "ssl", 00:24:16.924 "recv_buf_size": 4096, 00:24:16.924 "send_buf_size": 4096, 00:24:16.924 "tls_version": 0, 00:24:16.924 "zerocopy_threshold": 0 00:24:16.924 } 00:24:16.924 }, 00:24:16.924 { 00:24:16.924 "method": "sock_impl_set_options", 00:24:16.924 "params": { 00:24:16.924 "enable_ktls": false, 00:24:16.924 "enable_placement_id": 0, 00:24:16.924 "enable_quickack": false, 00:24:16.924 "enable_recv_pipe": true, 00:24:16.924 "enable_zerocopy_send_client": false, 00:24:16.924 "enable_zerocopy_send_server": true, 00:24:16.924 "impl_name": "posix", 00:24:16.924 "recv_buf_size": 2097152, 00:24:16.924 "send_buf_size": 2097152, 00:24:16.924 "tls_version": 0, 00:24:16.925 "zerocopy_threshold": 0 00:24:16.925 } 00:24:16.925 } 00:24:16.925 ] 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "subsystem": "vmd", 00:24:16.925 "config": [] 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "subsystem": "accel", 00:24:16.925 "config": [ 00:24:16.925 { 00:24:16.925 "method": 
"accel_set_options", 00:24:16.925 "params": { 00:24:16.925 "buf_count": 2048, 00:24:16.925 "large_cache_size": 16, 00:24:16.925 "sequence_count": 2048, 00:24:16.925 "small_cache_size": 128, 00:24:16.925 "task_count": 2048 00:24:16.925 } 00:24:16.925 } 00:24:16.925 ] 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "subsystem": "bdev", 00:24:16.925 "config": [ 00:24:16.925 { 00:24:16.925 "method": "bdev_set_options", 00:24:16.925 "params": { 00:24:16.925 "bdev_auto_examine": true, 00:24:16.925 "bdev_io_cache_size": 256, 00:24:16.925 "bdev_io_pool_size": 65535, 00:24:16.925 "iobuf_large_cache_size": 16, 00:24:16.925 "iobuf_small_cache_size": 128 00:24:16.925 } 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "method": "bdev_raid_set_options", 00:24:16.925 "params": { 00:24:16.925 "process_window_size_kb": 1024 00:24:16.925 } 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "method": "bdev_iscsi_set_options", 00:24:16.925 "params": { 00:24:16.925 "timeout_sec": 30 00:24:16.925 } 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "method": "bdev_nvme_set_options", 00:24:16.925 "params": { 00:24:16.925 "action_on_timeout": "none", 00:24:16.925 "allow_accel_sequence": false, 00:24:16.925 "arbitration_burst": 0, 00:24:16.925 "bdev_retry_count": 3, 00:24:16.925 "ctrlr_loss_timeout_sec": 0, 00:24:16.925 "delay_cmd_submit": true, 00:24:16.925 "dhchap_dhgroups": [ 00:24:16.925 "null", 00:24:16.925 "ffdhe2048", 00:24:16.925 "ffdhe3072", 00:24:16.925 "ffdhe4096", 00:24:16.925 "ffdhe6144", 00:24:16.925 "ffdhe8192" 00:24:16.925 ], 00:24:16.925 "dhchap_digests": [ 00:24:16.925 "sha256", 00:24:16.925 "sha384", 00:24:16.925 "sha512" 00:24:16.925 ], 00:24:16.925 "disable_auto_failback": false, 00:24:16.925 "fast_io_fail_timeout_sec": 0, 00:24:16.925 "generate_uuids": false, 00:24:16.925 "high_priority_weight": 0, 00:24:16.925 "io_path_stat": false, 00:24:16.925 "io_queue_requests": 512, 00:24:16.925 "keep_alive_timeout_ms": 10000, 00:24:16.925 "low_priority_weight": 0, 00:24:16.925 "medium_priority_weight": 0, 00:24:16.925 "nvme_adminq_poll_period_us": 10000, 00:24:16.925 "nvme_error_stat": false, 00:24:16.925 "nvme_ioq_poll_period_us": 0, 00:24:16.925 "rdma_cm_event_timeout_ms": 0, 00:24:16.925 "rdma_max_cq_size": 0, 00:24:16.925 "rdma_srq_size": 0, 00:24:16.925 "reconnect_delay_sec": 0, 00:24:16.925 "timeout_admin_us": 0, 00:24:16.925 "timeout_us": 0, 00:24:16.925 "transport_ack_timeout": 0, 00:24:16.925 "transport_retry_count": 4, 00:24:16.925 "transport_tos": 0 00:24:16.925 } 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "method": "bdev_nvme_attach_controller", 00:24:16.925 "params": { 00:24:16.925 "adrfam": "IPv4", 00:24:16.925 "ctrlr_loss_timeout_sec": 0, 00:24:16.925 "ddgst": false, 00:24:16.925 "fast_io_fail_timeout_sec": 0, 00:24:16.925 "hdgst": false, 00:24:16.925 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:16.925 "name": "nvme0", 00:24:16.925 "prchk_guard": false, 00:24:16.925 "prchk_reftag": false, 00:24:16.925 "psk": "key0", 00:24:16.925 "reconnect_delay_sec": 0, 00:24:16.925 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.925 "traddr": "127.0.0.1", 00:24:16.925 "trsvcid": "4420", 00:24:16.925 "trtype": "TCP" 00:24:16.925 } 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "method": "bdev_nvme_set_hotplug", 00:24:16.925 "params": { 00:24:16.925 "enable": false, 00:24:16.925 "period_us": 100000 00:24:16.925 } 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "method": "bdev_wait_for_examine" 00:24:16.925 } 00:24:16.925 ] 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "subsystem": "nbd", 00:24:16.925 "config": [] 00:24:16.925 } 
00:24:16.925 ] 00:24:16.925 }' 00:24:16.925 15:47:11 keyring_file -- keyring/file.sh@114 -- # killprocess 99406 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99406 ']' 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99406 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99406 00:24:16.925 killing process with pid 99406 00:24:16.925 Received shutdown signal, test time was about 1.000000 seconds 00:24:16.925 00:24:16.925 Latency(us) 00:24:16.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.925 =================================================================================================================== 00:24:16.925 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99406' 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@967 -- # kill 99406 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@972 -- # wait 99406 00:24:16.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:16.925 15:47:11 keyring_file -- keyring/file.sh@117 -- # bperfpid=99864 00:24:16.925 15:47:11 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:16.925 15:47:11 keyring_file -- keyring/file.sh@119 -- # waitforlisten 99864 /var/tmp/bperf.sock 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99864 ']' 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.925 15:47:11 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:16.925 15:47:11 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:24:16.925 "subsystems": [ 00:24:16.925 { 00:24:16.925 "subsystem": "keyring", 00:24:16.925 "config": [ 00:24:16.925 { 00:24:16.925 "method": "keyring_file_add_key", 00:24:16.925 "params": { 00:24:16.925 "name": "key0", 00:24:16.925 "path": "/tmp/tmp.pzXMVTnvKA" 00:24:16.925 } 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "method": "keyring_file_add_key", 00:24:16.925 "params": { 00:24:16.925 "name": "key1", 00:24:16.925 "path": "/tmp/tmp.Y6ecuXww8j" 00:24:16.925 } 00:24:16.925 } 00:24:16.925 ] 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "subsystem": "iobuf", 00:24:16.925 "config": [ 00:24:16.925 { 00:24:16.925 "method": "iobuf_set_options", 00:24:16.925 "params": { 00:24:16.925 "large_bufsize": 135168, 00:24:16.925 "large_pool_count": 1024, 00:24:16.925 "small_bufsize": 8192, 00:24:16.925 "small_pool_count": 8192 00:24:16.925 } 00:24:16.925 } 00:24:16.925 ] 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "subsystem": "sock", 00:24:16.925 "config": [ 00:24:16.925 { 00:24:16.925 "method": "sock_set_default_impl", 00:24:16.925 "params": { 00:24:16.925 "impl_name": "posix" 00:24:16.925 } 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "method": "sock_impl_set_options", 00:24:16.925 "params": { 00:24:16.925 "enable_ktls": false, 00:24:16.925 "enable_placement_id": 0, 00:24:16.925 "enable_quickack": false, 00:24:16.925 "enable_recv_pipe": true, 00:24:16.925 "enable_zerocopy_send_client": false, 00:24:16.925 "enable_zerocopy_send_server": true, 00:24:16.925 "impl_name": "ssl", 00:24:16.925 "recv_buf_size": 4096, 00:24:16.925 "send_buf_size": 4096, 00:24:16.925 "tls_version": 0, 00:24:16.925 "zerocopy_threshold": 0 00:24:16.925 } 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "method": "sock_impl_set_options", 00:24:16.925 "params": { 00:24:16.925 "enable_ktls": false, 00:24:16.925 "enable_placement_id": 0, 00:24:16.925 "enable_quickack": false, 00:24:16.925 "enable_recv_pipe": true, 00:24:16.925 "enable_zerocopy_send_client": false, 00:24:16.925 "enable_zerocopy_send_server": true, 00:24:16.925 "impl_name": "posix", 00:24:16.925 "recv_buf_size": 2097152, 00:24:16.925 "send_buf_size": 2097152, 00:24:16.925 "tls_version": 0, 00:24:16.925 "zerocopy_threshold": 0 00:24:16.925 } 00:24:16.925 } 00:24:16.925 ] 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "subsystem": "vmd", 00:24:16.925 "config": [] 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "subsystem": "accel", 00:24:16.925 "config": [ 00:24:16.925 { 00:24:16.925 "method": "accel_set_options", 00:24:16.925 "params": { 00:24:16.925 "buf_count": 2048, 00:24:16.925 "large_cache_size": 16, 00:24:16.925 "sequence_count": 2048, 00:24:16.925 "small_cache_size": 128, 00:24:16.925 "task_count": 2048 00:24:16.925 } 00:24:16.925 } 00:24:16.925 ] 00:24:16.925 }, 00:24:16.925 { 00:24:16.925 "subsystem": "bdev", 00:24:16.925 "config": [ 00:24:16.925 { 00:24:16.925 "method": "bdev_set_options", 00:24:16.925 "params": { 00:24:16.925 "bdev_auto_examine": true, 00:24:16.925 "bdev_io_cache_size": 256, 00:24:16.925 "bdev_io_pool_size": 65535, 00:24:16.926 "iobuf_large_cache_size": 16, 00:24:16.926 "iobuf_small_cache_size": 128 00:24:16.926 } 00:24:16.926 }, 00:24:16.926 { 00:24:16.926 "method": "bdev_raid_set_options", 00:24:16.926 "params": { 00:24:16.926 "process_window_size_kb": 1024 00:24:16.926 } 00:24:16.926 }, 00:24:16.926 { 00:24:16.926 "method": "bdev_iscsi_set_options", 00:24:16.926 "params": { 00:24:16.926 "timeout_sec": 30 00:24:16.926 } 00:24:16.926 }, 00:24:16.926 { 00:24:16.926 "method": 
"bdev_nvme_set_options", 00:24:16.926 "params": { 00:24:16.926 "action_on_timeout": "none", 00:24:16.926 "allow_accel_sequence": false, 00:24:16.926 "arbitration_burst": 0, 00:24:16.926 "bdev_retry_count": 3, 00:24:16.926 "ctrlr_loss_timeout_sec": 0, 00:24:16.926 "delay_cmd_submit": true, 00:24:16.926 "dhchap_dhgroups": [ 00:24:16.926 "null", 00:24:16.926 "ffdhe2048", 00:24:16.926 "ffdhe3072", 00:24:16.926 "ffdhe4096", 00:24:16.926 "ffdhe6144", 00:24:16.926 "ffdhe8192" 00:24:16.926 ], 00:24:16.926 "dhchap_digests": [ 00:24:16.926 "sha256", 00:24:16.926 "sha384", 00:24:16.926 "sha512" 00:24:16.926 ], 00:24:16.926 "disable_auto_failback": false, 00:24:16.926 "fast_io_fail_timeout_sec": 0, 00:24:16.926 "generate_uuids": false, 00:24:16.926 "high_priority_weight": 0, 00:24:16.926 "io_path_stat": false, 00:24:16.926 "io_queue_requests": 512, 00:24:16.926 "keep_alive_timeout_ms": 10000, 00:24:16.926 "low_priority_weight": 0, 00:24:16.926 "medium_priority_weight": 0, 00:24:16.926 "nvme_adminq_poll_period_us": 10000, 00:24:16.926 "nvme_error_stat": false, 00:24:16.926 "nvme_ioq_poll_period_us": 0, 00:24:16.926 "rdma_cm_event_timeout_ms": 0, 00:24:16.926 "rdma_max_cq_size": 0, 00:24:16.926 "rdma_srq_size": 0, 00:24:16.926 "reconnect_delay_sec": 0, 00:24:16.926 "timeout_admin_us": 0, 00:24:16.926 "timeout_us": 0, 00:24:16.926 "transport_ack_timeout": 0, 00:24:16.926 "transport_retry_count": 4, 00:24:16.926 "transport_tos": 0 00:24:16.926 } 00:24:16.926 }, 00:24:16.926 { 00:24:16.926 "method": "bdev_nvme_attach_controller", 00:24:16.926 "params": { 00:24:16.926 "adrfam": "IPv4", 00:24:16.926 "ctrlr_loss_timeout_sec": 0, 00:24:16.926 "ddgst": false, 00:24:16.926 "fast_io_fail_timeout_sec": 0, 00:24:16.926 "hdgst": false, 00:24:16.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:16.926 "name": "nvme0", 00:24:16.926 "prchk_guard": false, 00:24:16.926 "prchk_reftag": false, 00:24:16.926 "psk": "key0", 00:24:16.926 "reconnect_delay_sec": 0, 00:24:16.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.926 "traddr": "127.0.0.1", 00:24:16.926 "trsvcid": "4420", 00:24:16.926 "trtype": "TCP" 00:24:16.926 } 00:24:16.926 }, 00:24:16.926 { 00:24:16.926 "method": "bdev_nvme_set_hotplug", 00:24:16.926 "params": { 00:24:16.926 "enable": false, 00:24:16.926 "period_us": 100000 00:24:16.926 } 00:24:16.926 }, 00:24:16.926 { 00:24:16.926 "method": "bdev_wait_for_examine" 00:24:16.926 } 00:24:16.926 ] 00:24:16.926 }, 00:24:16.926 { 00:24:16.926 "subsystem": "nbd", 00:24:16.926 "config": [] 00:24:16.926 } 00:24:16.926 ] 00:24:16.926 }' 00:24:16.926 15:47:11 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.926 15:47:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:16.926 [2024-07-15 15:47:12.006310] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:24:16.926 [2024-07-15 15:47:12.006406] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99864 ] 00:24:17.185 [2024-07-15 15:47:12.140224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.185 [2024-07-15 15:47:12.199643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.443 [2024-07-15 15:47:12.340881] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:17.444 15:47:12 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.444 15:47:12 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:17.444 15:47:12 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:17.444 15:47:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.444 15:47:12 keyring_file -- keyring/file.sh@120 -- # jq length 00:24:17.702 15:47:12 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:17.702 15:47:12 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:24:17.702 15:47:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:17.702 15:47:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:17.702 15:47:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:17.702 15:47:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:17.702 15:47:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.961 15:47:13 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:17.961 15:47:13 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:24:17.961 15:47:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:17.961 15:47:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:17.961 15:47:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:17.961 15:47:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.961 15:47:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:18.528 15:47:13 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:18.528 15:47:13 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:18.528 15:47:13 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:18.528 15:47:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:18.528 15:47:13 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:18.528 15:47:13 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:18.528 15:47:13 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.pzXMVTnvKA /tmp/tmp.Y6ecuXww8j 00:24:18.528 15:47:13 keyring_file -- keyring/file.sh@20 -- # killprocess 99864 00:24:18.528 15:47:13 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99864 ']' 00:24:18.528 15:47:13 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99864 00:24:18.528 15:47:13 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:18.528 15:47:13 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:18.528 
15:47:13 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99864 00:24:18.786 killing process with pid 99864 00:24:18.786 Received shutdown signal, test time was about 1.000000 seconds 00:24:18.786 00:24:18.786 Latency(us) 00:24:18.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.786 =================================================================================================================== 00:24:18.786 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99864' 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@967 -- # kill 99864 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@972 -- # wait 99864 00:24:18.786 15:47:13 keyring_file -- keyring/file.sh@21 -- # killprocess 99371 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99371 ']' 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99371 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99371 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:18.786 killing process with pid 99371 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99371' 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@967 -- # kill 99371 00:24:18.786 [2024-07-15 15:47:13.845061] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:18.786 15:47:13 keyring_file -- common/autotest_common.sh@972 -- # wait 99371 00:24:19.044 00:24:19.044 real 0m14.836s 00:24:19.044 user 0m37.795s 00:24:19.044 sys 0m3.057s 00:24:19.044 15:47:14 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:19.044 15:47:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:19.044 ************************************ 00:24:19.044 END TEST keyring_file 00:24:19.044 ************************************ 00:24:19.044 15:47:14 -- common/autotest_common.sh@1142 -- # return 0 00:24:19.044 15:47:14 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:24:19.044 15:47:14 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:19.044 15:47:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:19.044 15:47:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.044 15:47:14 -- common/autotest_common.sh@10 -- # set +x 00:24:19.044 ************************************ 00:24:19.044 START TEST keyring_linux 00:24:19.044 ************************************ 00:24:19.044 15:47:14 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:19.303 * Looking for test storage... 
00:24:19.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:19.303 15:47:14 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:19.303 15:47:14 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a562778a-75ef-4aa2-a26b-1faf0d9483fb 00:24:19.303 15:47:14 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:19.304 15:47:14 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.304 15:47:14 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.304 15:47:14 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.304 15:47:14 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.304 15:47:14 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.304 15:47:14 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.304 15:47:14 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:19.304 15:47:14 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:19.304 15:47:14 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:19.304 15:47:14 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:19.304 15:47:14 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:19.304 15:47:14 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:19.304 15:47:14 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:19.304 15:47:14 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:19.304 15:47:14 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:19.304 /tmp/:spdk-test:key0 00:24:19.304 15:47:14 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:19.304 15:47:14 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:19.304 /tmp/:spdk-test:key1 00:24:19.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.304 15:47:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:19.304 15:47:14 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=99999 00:24:19.304 15:47:14 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:19.304 15:47:14 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 99999 00:24:19.304 15:47:14 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 99999 ']' 00:24:19.304 15:47:14 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.304 15:47:14 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.304 15:47:14 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.304 15:47:14 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.304 15:47:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:19.304 [2024-07-15 15:47:14.415880] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:24:19.304 [2024-07-15 15:47:14.416723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99999 ] 00:24:19.563 [2024-07-15 15:47:14.551587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.563 [2024-07-15 15:47:14.608912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.823 15:47:14 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.823 15:47:14 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:19.823 15:47:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:19.823 15:47:14 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.823 15:47:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:19.823 [2024-07-15 15:47:14.777703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.823 null0 00:24:19.823 [2024-07-15 15:47:14.809635] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.823 [2024-07-15 15:47:14.809985] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:19.823 15:47:14 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.823 15:47:14 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:19.823 46720773 00:24:19.823 15:47:14 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:19.823 223830853 00:24:19.823 15:47:14 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:19.823 15:47:14 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100022 00:24:19.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:19.823 15:47:14 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100022 /var/tmp/bperf.sock 00:24:19.823 15:47:14 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100022 ']' 00:24:19.823 15:47:14 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:19.823 15:47:14 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.823 15:47:14 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:19.823 15:47:14 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.823 15:47:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:19.823 [2024-07-15 15:47:14.895801] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:24:19.823 [2024-07-15 15:47:14.896073] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100022 ] 00:24:20.082 [2024-07-15 15:47:15.035783] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.082 [2024-07-15 15:47:15.107729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.019 15:47:15 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.019 15:47:15 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:21.019 15:47:15 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:21.019 15:47:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:21.019 15:47:16 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:21.019 15:47:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:21.586 15:47:16 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:21.586 15:47:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:21.586 [2024-07-15 15:47:16.656275] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.844 nvme0n1 00:24:21.844 15:47:16 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:21.844 15:47:16 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:21.844 15:47:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:21.844 15:47:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:21.844 15:47:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:21.844 15:47:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:22.103 15:47:17 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:22.103 15:47:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:22.103 15:47:17 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:22.103 15:47:17 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:22.103 15:47:17 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:22.103 15:47:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:22.103 15:47:17 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:22.361 15:47:17 keyring_linux -- keyring/linux.sh@25 -- # sn=46720773 00:24:22.361 15:47:17 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:22.361 15:47:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:22.361 15:47:17 keyring_linux -- keyring/linux.sh@26 -- # [[ 46720773 == \4\6\7\2\0\7\7\3 ]] 00:24:22.361 15:47:17 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 46720773 00:24:22.361 15:47:17 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:22.361 15:47:17 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:22.361 Running I/O for 1 seconds... 00:24:23.734 00:24:23.734 Latency(us) 00:24:23.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.734 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:23.734 nvme0n1 : 1.01 11800.47 46.10 0.00 0.00 10779.11 7298.33 18111.77 00:24:23.734 =================================================================================================================== 00:24:23.734 Total : 11800.47 46.10 0.00 0.00 10779.11 7298.33 18111.77 00:24:23.734 0 00:24:23.734 15:47:18 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:23.734 15:47:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:23.734 15:47:18 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:23.734 15:47:18 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:23.734 15:47:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:23.734 15:47:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:23.734 15:47:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:23.734 15:47:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:23.992 15:47:19 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:23.992 15:47:19 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:23.992 15:47:19 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:23.992 15:47:19 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:23.992 15:47:19 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:24:23.992 15:47:19 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:23.992 15:47:19 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:23.992 15:47:19 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.992 15:47:19 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:23.992 15:47:19 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.992 15:47:19 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:23.992 15:47:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:24:24.249 [2024-07-15 15:47:19.334208] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:24.249 [2024-07-15 15:47:19.334421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbeea0 (107): Transport endpoint is not connected 00:24:24.249 [2024-07-15 15:47:19.335409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbeea0 (9): Bad file descriptor 00:24:24.249 [2024-07-15 15:47:19.336405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:24.250 [2024-07-15 15:47:19.336432] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:24.250 [2024-07-15 15:47:19.336443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:24.250 2024/07/15 15:47:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:24.250 request: 00:24:24.250 { 00:24:24.250 "method": "bdev_nvme_attach_controller", 00:24:24.250 "params": { 00:24:24.250 "name": "nvme0", 00:24:24.250 "trtype": "tcp", 00:24:24.250 "traddr": "127.0.0.1", 00:24:24.250 "adrfam": "ipv4", 00:24:24.250 "trsvcid": "4420", 00:24:24.250 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:24.250 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:24.250 "prchk_reftag": false, 00:24:24.250 "prchk_guard": false, 00:24:24.250 "hdgst": false, 00:24:24.250 "ddgst": false, 00:24:24.250 "psk": ":spdk-test:key1" 00:24:24.250 } 00:24:24.250 } 00:24:24.250 Got JSON-RPC error response 00:24:24.250 GoRPCClient: error on JSON-RPC call 00:24:24.250 15:47:19 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:24:24.250 15:47:19 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:24.250 15:47:19 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:24.250 15:47:19 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@33 -- # sn=46720773 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 46720773 00:24:24.250 1 links removed 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@33 -- # sn=223830853 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 223830853 00:24:24.250 1 links removed 00:24:24.250 15:47:19 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100022 00:24:24.250 15:47:19 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100022 ']' 00:24:24.250 15:47:19 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100022 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100022 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100022' 00:24:24.508 killing process with pid 100022 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@967 -- # kill 100022 00:24:24.508 Received shutdown signal, test time was about 1.000000 seconds 00:24:24.508 00:24:24.508 Latency(us) 00:24:24.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.508 =================================================================================================================== 00:24:24.508 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@972 -- # wait 100022 00:24:24.508 15:47:19 keyring_linux -- keyring/linux.sh@42 -- # killprocess 99999 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 99999 ']' 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 99999 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99999 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:24.508 15:47:19 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:24.508 killing process with pid 99999 00:24:24.509 15:47:19 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99999' 00:24:24.509 15:47:19 keyring_linux -- common/autotest_common.sh@967 -- # kill 99999 00:24:24.509 15:47:19 keyring_linux -- common/autotest_common.sh@972 -- # wait 99999 00:24:24.766 ************************************ 00:24:24.766 END TEST keyring_linux 00:24:24.766 ************************************ 00:24:24.766 00:24:24.766 real 0m5.700s 00:24:24.766 user 0m11.847s 00:24:24.766 sys 0m1.481s 00:24:24.766 15:47:19 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:24.766 15:47:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:24.766 15:47:19 -- common/autotest_common.sh@1142 -- # return 0 00:24:24.766 15:47:19 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:24:24.766 15:47:19 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:24:24.766 15:47:19 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:24:24.766 15:47:19 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:24:24.766 15:47:19 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 
00:24:24.766 15:47:19 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:24:24.766 15:47:19 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:24:24.766 15:47:19 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:24:24.766 15:47:19 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:24:24.766 15:47:19 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:24:24.766 15:47:19 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:24:24.766 15:47:19 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:24:24.766 15:47:19 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:24:24.766 15:47:19 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:24:24.766 15:47:19 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:24:24.766 15:47:19 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:24:24.766 15:47:19 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:24:24.766 15:47:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:24.766 15:47:19 -- common/autotest_common.sh@10 -- # set +x 00:24:24.766 15:47:19 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:24:24.766 15:47:19 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:24:25.023 15:47:19 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:24:25.023 15:47:19 -- common/autotest_common.sh@10 -- # set +x 00:24:26.395 INFO: APP EXITING 00:24:26.395 INFO: killing all VMs 00:24:26.395 INFO: killing vhost app 00:24:26.395 INFO: EXIT DONE 00:24:26.961 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:27.219 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:27.219 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:27.786 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:27.786 Cleaning 00:24:27.786 Removing: /var/run/dpdk/spdk0/config 00:24:27.786 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:27.786 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:27.786 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:27.786 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:27.786 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:27.786 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:27.786 Removing: /var/run/dpdk/spdk1/config 00:24:27.786 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:27.786 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:27.786 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:27.786 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:27.786 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:27.786 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:27.786 Removing: /var/run/dpdk/spdk2/config 00:24:27.786 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:27.786 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:28.046 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:28.046 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:28.046 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:28.046 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:28.046 Removing: /var/run/dpdk/spdk3/config 00:24:28.046 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:28.046 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:28.046 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:28.046 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:28.046 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:28.046 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:28.046 
Removing: /var/run/dpdk/spdk4/config 00:24:28.046 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:28.046 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:28.046 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:28.046 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:28.046 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:28.046 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:28.046 Removing: /dev/shm/nvmf_trace.0 00:24:28.046 Removing: /dev/shm/spdk_tgt_trace.pid60584 00:24:28.046 Removing: /var/run/dpdk/spdk0 00:24:28.046 Removing: /var/run/dpdk/spdk1 00:24:28.046 Removing: /var/run/dpdk/spdk2 00:24:28.046 Removing: /var/run/dpdk/spdk3 00:24:28.046 Removing: /var/run/dpdk/spdk4 00:24:28.046 Removing: /var/run/dpdk/spdk_pid100022 00:24:28.046 Removing: /var/run/dpdk/spdk_pid60439 00:24:28.046 Removing: /var/run/dpdk/spdk_pid60584 00:24:28.046 Removing: /var/run/dpdk/spdk_pid60845 00:24:28.046 Removing: /var/run/dpdk/spdk_pid60932 00:24:28.046 Removing: /var/run/dpdk/spdk_pid60958 00:24:28.046 Removing: /var/run/dpdk/spdk_pid61062 00:24:28.046 Removing: /var/run/dpdk/spdk_pid61092 00:24:28.046 Removing: /var/run/dpdk/spdk_pid61210 00:24:28.046 Removing: /var/run/dpdk/spdk_pid61491 00:24:28.046 Removing: /var/run/dpdk/spdk_pid61667 00:24:28.046 Removing: /var/run/dpdk/spdk_pid61744 00:24:28.046 Removing: /var/run/dpdk/spdk_pid61817 00:24:28.046 Removing: /var/run/dpdk/spdk_pid61893 00:24:28.046 Removing: /var/run/dpdk/spdk_pid61931 00:24:28.046 Removing: /var/run/dpdk/spdk_pid61961 00:24:28.046 Removing: /var/run/dpdk/spdk_pid62023 00:24:28.046 Removing: /var/run/dpdk/spdk_pid62113 00:24:28.046 Removing: /var/run/dpdk/spdk_pid62736 00:24:28.046 Removing: /var/run/dpdk/spdk_pid62789 00:24:28.046 Removing: /var/run/dpdk/spdk_pid62839 00:24:28.046 Removing: /var/run/dpdk/spdk_pid62867 00:24:28.046 Removing: /var/run/dpdk/spdk_pid62946 00:24:28.046 Removing: /var/run/dpdk/spdk_pid62974 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63053 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63081 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63127 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63157 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63209 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63239 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63385 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63415 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63490 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63546 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63570 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63629 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63658 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63692 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63727 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63760 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63796 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63825 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63859 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63894 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63923 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63963 00:24:28.046 Removing: /var/run/dpdk/spdk_pid63992 00:24:28.046 Removing: /var/run/dpdk/spdk_pid64021 00:24:28.046 Removing: /var/run/dpdk/spdk_pid64062 00:24:28.046 Removing: /var/run/dpdk/spdk_pid64093 00:24:28.046 Removing: /var/run/dpdk/spdk_pid64128 00:24:28.046 Removing: /var/run/dpdk/spdk_pid64162 00:24:28.046 Removing: /var/run/dpdk/spdk_pid64194 00:24:28.046 Removing: /var/run/dpdk/spdk_pid64237 00:24:28.046 Removing: /var/run/dpdk/spdk_pid64266 
00:24:28.046 Removing: /var/run/dpdk/spdk_pid64304 00:24:28.046 Removing: /var/run/dpdk/spdk_pid64368 00:24:28.046 Removing: /var/run/dpdk/spdk_pid64479 00:24:28.046 Removing: /var/run/dpdk/spdk_pid64868 00:24:28.046 Removing: /var/run/dpdk/spdk_pid68191 00:24:28.046 Removing: /var/run/dpdk/spdk_pid68519 00:24:28.046 Removing: /var/run/dpdk/spdk_pid70947 00:24:28.046 Removing: /var/run/dpdk/spdk_pid71304 00:24:28.046 Removing: /var/run/dpdk/spdk_pid71539 00:24:28.304 Removing: /var/run/dpdk/spdk_pid71586 00:24:28.304 Removing: /var/run/dpdk/spdk_pid72207 00:24:28.304 Removing: /var/run/dpdk/spdk_pid72624 00:24:28.304 Removing: /var/run/dpdk/spdk_pid72674 00:24:28.304 Removing: /var/run/dpdk/spdk_pid73035 00:24:28.304 Removing: /var/run/dpdk/spdk_pid73545 00:24:28.304 Removing: /var/run/dpdk/spdk_pid73991 00:24:28.304 Removing: /var/run/dpdk/spdk_pid74919 00:24:28.304 Removing: /var/run/dpdk/spdk_pid75897 00:24:28.304 Removing: /var/run/dpdk/spdk_pid76008 00:24:28.304 Removing: /var/run/dpdk/spdk_pid76076 00:24:28.304 Removing: /var/run/dpdk/spdk_pid77503 00:24:28.304 Removing: /var/run/dpdk/spdk_pid77728 00:24:28.304 Removing: /var/run/dpdk/spdk_pid82981 00:24:28.304 Removing: /var/run/dpdk/spdk_pid83414 00:24:28.304 Removing: /var/run/dpdk/spdk_pid83521 00:24:28.304 Removing: /var/run/dpdk/spdk_pid83650 00:24:28.304 Removing: /var/run/dpdk/spdk_pid83696 00:24:28.304 Removing: /var/run/dpdk/spdk_pid83740 00:24:28.304 Removing: /var/run/dpdk/spdk_pid83768 00:24:28.304 Removing: /var/run/dpdk/spdk_pid83898 00:24:28.304 Removing: /var/run/dpdk/spdk_pid84046 00:24:28.304 Removing: /var/run/dpdk/spdk_pid84305 00:24:28.304 Removing: /var/run/dpdk/spdk_pid84422 00:24:28.304 Removing: /var/run/dpdk/spdk_pid84664 00:24:28.304 Removing: /var/run/dpdk/spdk_pid84776 00:24:28.304 Removing: /var/run/dpdk/spdk_pid84896 00:24:28.304 Removing: /var/run/dpdk/spdk_pid85234 00:24:28.304 Removing: /var/run/dpdk/spdk_pid85649 00:24:28.304 Removing: /var/run/dpdk/spdk_pid85933 00:24:28.304 Removing: /var/run/dpdk/spdk_pid86422 00:24:28.304 Removing: /var/run/dpdk/spdk_pid86424 00:24:28.304 Removing: /var/run/dpdk/spdk_pid86752 00:24:28.304 Removing: /var/run/dpdk/spdk_pid86774 00:24:28.304 Removing: /var/run/dpdk/spdk_pid86788 00:24:28.304 Removing: /var/run/dpdk/spdk_pid86814 00:24:28.304 Removing: /var/run/dpdk/spdk_pid86825 00:24:28.304 Removing: /var/run/dpdk/spdk_pid87178 00:24:28.304 Removing: /var/run/dpdk/spdk_pid87221 00:24:28.304 Removing: /var/run/dpdk/spdk_pid87554 00:24:28.304 Removing: /var/run/dpdk/spdk_pid87792 00:24:28.304 Removing: /var/run/dpdk/spdk_pid88269 00:24:28.304 Removing: /var/run/dpdk/spdk_pid88839 00:24:28.304 Removing: /var/run/dpdk/spdk_pid90152 00:24:28.304 Removing: /var/run/dpdk/spdk_pid90731 00:24:28.304 Removing: /var/run/dpdk/spdk_pid90733 00:24:28.304 Removing: /var/run/dpdk/spdk_pid92646 00:24:28.304 Removing: /var/run/dpdk/spdk_pid92732 00:24:28.304 Removing: /var/run/dpdk/spdk_pid92803 00:24:28.304 Removing: /var/run/dpdk/spdk_pid92888 00:24:28.304 Removing: /var/run/dpdk/spdk_pid93032 00:24:28.304 Removing: /var/run/dpdk/spdk_pid93103 00:24:28.304 Removing: /var/run/dpdk/spdk_pid93188 00:24:28.304 Removing: /var/run/dpdk/spdk_pid93277 00:24:28.304 Removing: /var/run/dpdk/spdk_pid93587 00:24:28.304 Removing: /var/run/dpdk/spdk_pid94262 00:24:28.304 Removing: /var/run/dpdk/spdk_pid95590 00:24:28.304 Removing: /var/run/dpdk/spdk_pid95790 00:24:28.304 Removing: /var/run/dpdk/spdk_pid96062 00:24:28.304 Removing: /var/run/dpdk/spdk_pid96365 00:24:28.304 Removing: 
/var/run/dpdk/spdk_pid96881 00:24:28.304 Removing: /var/run/dpdk/spdk_pid96892 00:24:28.304 Removing: /var/run/dpdk/spdk_pid97235 00:24:28.304 Removing: /var/run/dpdk/spdk_pid97390 00:24:28.304 Removing: /var/run/dpdk/spdk_pid97548 00:24:28.304 Removing: /var/run/dpdk/spdk_pid97644 00:24:28.304 Removing: /var/run/dpdk/spdk_pid97800 00:24:28.304 Removing: /var/run/dpdk/spdk_pid97909 00:24:28.304 Removing: /var/run/dpdk/spdk_pid98560 00:24:28.304 Removing: /var/run/dpdk/spdk_pid98594 00:24:28.304 Removing: /var/run/dpdk/spdk_pid98631 00:24:28.304 Removing: /var/run/dpdk/spdk_pid98884 00:24:28.304 Removing: /var/run/dpdk/spdk_pid98919 00:24:28.304 Removing: /var/run/dpdk/spdk_pid98949 00:24:28.304 Removing: /var/run/dpdk/spdk_pid99371 00:24:28.304 Removing: /var/run/dpdk/spdk_pid99406 00:24:28.304 Removing: /var/run/dpdk/spdk_pid99864 00:24:28.304 Removing: /var/run/dpdk/spdk_pid99999 00:24:28.304 Clean 00:24:28.562 15:47:23 -- common/autotest_common.sh@1451 -- # return 0 00:24:28.562 15:47:23 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:24:28.562 15:47:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:28.562 15:47:23 -- common/autotest_common.sh@10 -- # set +x 00:24:28.562 15:47:23 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:24:28.562 15:47:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:28.562 15:47:23 -- common/autotest_common.sh@10 -- # set +x 00:24:28.562 15:47:23 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:28.562 15:47:23 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:28.562 15:47:23 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:28.562 15:47:23 -- spdk/autotest.sh@391 -- # hash lcov 00:24:28.562 15:47:23 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:24:28.562 15:47:23 -- spdk/autotest.sh@393 -- # hostname 00:24:28.562 15:47:23 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:28.821 geninfo: WARNING: invalid characters removed from testname! 
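The coverage post-processing that begins here follows a capture/merge/filter pattern: lcov -c captures post-test counters from the build tree into cov_test.info (the geninfo warning only means the long fedora38-cloud-... hostname used as the test name contains characters geninfo strips, and is harmless), the result is merged with the pre-test baseline cov_base.info, and files belonging to bundled DPDK, system headers, and a few sample apps are then removed from the combined tracefile. A condensed sketch with the repeated options factored into a variable; the paths and filter patterns are the ones the log uses:

  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
  cd /home/vagrant/spdk_repo/spdk
  lcov $LCOV_OPTS -c -d . -t "$(hostname)" -o ../output/cov_test.info       # capture counters after the tests
  lcov $LCOV_OPTS -a ../output/cov_base.info -a ../output/cov_test.info -o ../output/cov_total.info   # merge with the baseline
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r ../output/cov_total.info "$pattern" -o ../output/cov_total.info              # drop third-party and sample-app files
  done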
00:24:55.360 15:47:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:59.543 15:47:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:02.827 15:47:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:05.439 15:48:00 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:08.722 15:48:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:11.252 15:48:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:14.536 15:48:09 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:14.536 15:48:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:14.536 15:48:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:14.536 15:48:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.536 15:48:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.536 15:48:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.536 15:48:09 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.536 15:48:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.536 15:48:09 -- paths/export.sh@5 -- $ export PATH 00:25:14.536 15:48:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.536 15:48:09 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:14.536 15:48:09 -- common/autobuild_common.sh@444 -- $ date +%s 00:25:14.536 15:48:09 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721058489.XXXXXX 00:25:14.536 15:48:09 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721058489.UfsEcY 00:25:14.536 15:48:09 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:25:14.537 15:48:09 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:25:14.537 15:48:09 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:25:14.537 15:48:09 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:14.537 15:48:09 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:14.537 15:48:09 -- common/autobuild_common.sh@460 -- $ get_config_params 00:25:14.537 15:48:09 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:25:14.537 15:48:09 -- common/autotest_common.sh@10 -- $ set +x 00:25:14.537 15:48:09 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:25:14.537 15:48:09 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:25:14.537 15:48:09 -- pm/common@17 -- $ local monitor 00:25:14.537 15:48:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:14.537 15:48:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:14.537 15:48:09 -- pm/common@25 -- $ sleep 1 00:25:14.537 15:48:09 -- pm/common@21 -- $ date +%s 00:25:14.537 15:48:09 -- pm/common@21 -- $ date +%s 00:25:14.537 15:48:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721058489 00:25:14.537 15:48:09 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721058489 00:25:14.537 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721058489_collect-vmstat.pm.log 00:25:14.537 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721058489_collect-cpu-load.pm.log 00:25:15.105 15:48:10 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:25:15.105 15:48:10 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:25:15.105 15:48:10 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:25:15.105 15:48:10 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:15.105 15:48:10 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:15.105 15:48:10 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:15.105 15:48:10 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:15.105 15:48:10 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:15.105 15:48:10 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:15.105 15:48:10 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:15.105 15:48:10 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:15.105 15:48:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:15.105 15:48:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:15.105 15:48:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:15.105 15:48:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:15.105 15:48:10 -- pm/common@44 -- $ pid=101721 00:25:15.105 15:48:10 -- pm/common@50 -- $ kill -TERM 101721 00:25:15.105 15:48:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:15.105 15:48:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:15.105 15:48:10 -- pm/common@44 -- $ pid=101723 00:25:15.105 15:48:10 -- pm/common@50 -- $ kill -TERM 101723 00:25:15.105 + [[ -n 5143 ]] 00:25:15.105 + sudo kill 5143 00:25:15.115 [Pipeline] } 00:25:15.137 [Pipeline] // timeout 00:25:15.143 [Pipeline] } 00:25:15.166 [Pipeline] // stage 00:25:15.172 [Pipeline] } 00:25:15.187 [Pipeline] // catchError 00:25:15.197 [Pipeline] stage 00:25:15.200 [Pipeline] { (Stop VM) 00:25:15.214 [Pipeline] sh 00:25:15.491 + vagrant halt 00:25:19.678 ==> default: Halting domain... 00:25:26.252 [Pipeline] sh 00:25:26.557 + vagrant destroy -f 00:25:30.769 ==> default: Removing domain... 
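The VM teardown above is a deliberate two-step sequence: vagrant halt first asks libvirt for a clean guest shutdown ("Halting domain..."), and vagrant destroy -f then removes the domain and the resources Vagrant created for it without prompting ("Removing domain..."). A minimal sketch of the equivalent manual steps; the pipeline's sh steps do not print their working directory, so the path below is an assumption about where the test VM's Vagrantfile lives:

  cd /var/jenkins/workspace/nvmf-tcp-vg-autotest   # assumed Vagrantfile location
  vagrant halt                                     # graceful shutdown -> "Halting domain..."
  vagrant destroy -f                               # remove the libvirt domain without confirmation -> "Removing domain..."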
00:25:30.782 [Pipeline] sh 00:25:31.064 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:25:31.072 [Pipeline] } 00:25:31.093 [Pipeline] // stage 00:25:31.101 [Pipeline] } 00:25:31.125 [Pipeline] // dir 00:25:31.132 [Pipeline] } 00:25:31.154 [Pipeline] // wrap 00:25:31.163 [Pipeline] } 00:25:31.184 [Pipeline] // catchError 00:25:31.195 [Pipeline] stage 00:25:31.198 [Pipeline] { (Epilogue) 00:25:31.216 [Pipeline] sh 00:25:31.498 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:38.069 [Pipeline] catchError 00:25:38.071 [Pipeline] { 00:25:38.086 [Pipeline] sh 00:25:38.365 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:38.623 Artifacts sizes are good 00:25:38.632 [Pipeline] } 00:25:38.651 [Pipeline] // catchError 00:25:38.662 [Pipeline] archiveArtifacts 00:25:38.670 Archiving artifacts 00:25:38.854 [Pipeline] cleanWs 00:25:38.865 [WS-CLEANUP] Deleting project workspace... 00:25:38.865 [WS-CLEANUP] Deferred wipeout is used... 00:25:38.871 [WS-CLEANUP] done 00:25:38.874 [Pipeline] } 00:25:38.894 [Pipeline] // stage 00:25:38.901 [Pipeline] } 00:25:38.918 [Pipeline] // node 00:25:38.924 [Pipeline] End of Pipeline 00:25:38.960 Finished: SUCCESS
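The epilogue recorded above reduces to a few shell steps plus two pipeline-native steps (archiveArtifacts and cleanWs). A sketch of those shell steps; the script paths are taken from the log, but their internals are not shown in this run, so the trailing comments describe only the observable effect:

  mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output        # hand the test output directory to the Jenkins workspace
  cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
  jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh           # shrink per-test logs before archiving
  jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh         # size gate; prints "Artifacts sizes are good" when within budget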